Dec 05 19:03:41 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 05 19:03:41 crc restorecon[4672]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 
19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 19:03:41 crc 
restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 
19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 
19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 19:03:41 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:41 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 19:03:42 crc restorecon[4672]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 19:03:42 crc restorecon[4672]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
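The long run of "not reset as customized by admin" messages above is restorecon reporting contexts it treats as deliberate admin customizations and therefore leaves in place; only the unlabeled config.json and the kubenswrapper binary were actually relabeled. A minimal sketch with standard SELinux tooling for inspecting such contexts and, if an admin so decides, force-resetting them (paths taken from the log; forcing a relabel is not something the log itself prescribes):

    # Compare the policy default against the current label for one of the logged paths
    matchpathcon /var/lib/kubelet/config.json
    ls -Z /var/lib/kubelet/config.json
    # Recursive relabel; -F also resets contexts restorecon considers admin-customized
    restorecon -RFv /var/lib/kubelet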
Dec 05 19:03:42 crc kubenswrapper[4828]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 05 19:03:42 crc kubenswrapper[4828]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 05 19:03:42 crc kubenswrapper[4828]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 05 19:03:42 crc kubenswrapper[4828]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 05 19:03:42 crc kubenswrapper[4828]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 05 19:03:42 crc kubenswrapper[4828]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.267511 4828 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
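Each "should be set via the config file" warning above points at the KubeletConfiguration API. A minimal sketch of the equivalent config-file fields (field names from the kubelet.config.k8s.io/v1beta1 API; the file path and all values here are illustrative assumptions, not taken from this node):

    # Hypothetical: write a config file and point the kubelet at it with --config
    cat <<'EOF' > /etc/kubernetes/kubelet-config-example.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock      # replaces --container-runtime-endpoint
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # replaces --volume-plugin-dir
    registerWithTaints:                                           # replaces --register-with-taints
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    systemReserved:                                               # replaces --system-reserved
      cpu: 500m
      memory: 1Gi
    EOF

(--minimum-container-ttl-duration has no one-to-one field; per the warning itself, eviction settings such as evictionHard and evictionSoft take over its role.)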
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270568 4828 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270575 4828 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270580 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270585 4828 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270591 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270597 4828 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270602 4828 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270608 4828 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270612 4828 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270618 4828 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270623 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270628 4828 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270633 4828 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270637 4828 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270642 4828 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270647 4828 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270652 4828 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270657 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270661 4828 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270666 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270671 4828 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270677 4828 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270682 4828 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270687 4828 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270692 4828 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270697 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270704 4828 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270711 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270717 4828 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270724 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270729 4828 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270734 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270741 4828 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270747 4828 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270752 4828 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270757 4828 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270762 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270767 4828 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270772 4828 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270776 4828 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270781 4828 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270786 4828 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270790 4828 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270795 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270800 4828 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270804 4828 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270809 4828 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270814 4828 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270820 4828 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270842 4828 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270848 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270855 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270861 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270866 4828 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270871 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270876 4828 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.270881 4828 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.270974 4828 flags.go:64] FLAG: --address="0.0.0.0"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.270986 4828 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.270996 4828 flags.go:64] FLAG: --anonymous-auth="true"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271003 4828 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271012 4828 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271018 4828 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271025 4828 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271031 4828 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271037 4828 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271043 4828 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271049 4828 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271055 4828 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271061 4828 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271067 4828 flags.go:64] FLAG: --cgroup-root=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271072 4828 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271078 4828 flags.go:64] FLAG: --client-ca-file=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271083 4828 flags.go:64] FLAG: --cloud-config=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271089 4828 flags.go:64] FLAG: --cloud-provider=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271094 4828 flags.go:64] FLAG: --cluster-dns="[]"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271100 4828 flags.go:64] FLAG: --cluster-domain=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271106 4828 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271112 4828 flags.go:64] FLAG: --config-dir=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271117 4828 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271125 4828 flags.go:64] FLAG: --container-log-max-files="5"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271133 4828 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271139 4828 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271144 4828 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271150 4828 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271156 4828 flags.go:64] FLAG: --contention-profiling="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271161 4828 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271167 4828 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271173 4828 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271179 4828 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271185 4828 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271191 4828 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271197 4828 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271203 4828 flags.go:64] FLAG: --enable-load-reader="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271209 4828 flags.go:64] FLAG: --enable-server="true"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271214 4828 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271221 4828 flags.go:64] FLAG: --event-burst="100"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271227 4828 flags.go:64] FLAG: --event-qps="50"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271232 4828 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271238 4828 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271244 4828 flags.go:64] FLAG: --eviction-hard=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271250 4828 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271256 4828 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271261 4828 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271267 4828 flags.go:64] FLAG: --eviction-soft=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271272 4828 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271278 4828 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271283 4828 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271289 4828 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271294 4828 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271300 4828 flags.go:64] FLAG: --fail-swap-on="true"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271305 4828 flags.go:64] FLAG: --feature-gates=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271312 4828 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271318 4828 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271323 4828 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271329 4828 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271335 4828 flags.go:64] FLAG: --healthz-port="10248"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271341 4828 flags.go:64] FLAG: --help="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271346 4828 flags.go:64] FLAG: --hostname-override=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271352 4828 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271359 4828 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271364 4828 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271370 4828 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271375 4828 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271382 4828 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271387 4828 flags.go:64] FLAG: --image-service-endpoint=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271393 4828 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271399 4828 flags.go:64] FLAG: --kube-api-burst="100"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271405 4828 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271411 4828 flags.go:64] FLAG: --kube-api-qps="50"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271417 4828 flags.go:64] FLAG: --kube-reserved=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271423 4828 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271428 4828 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271434 4828 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271439 4828 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271446 4828 flags.go:64] FLAG: --lock-file=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271452 4828 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271457 4828 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271463 4828 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271471 4828 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271476 4828 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271482 4828 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271488 4828 flags.go:64] FLAG: --logging-format="text"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271493 4828 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271499 4828 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271504 4828 flags.go:64] FLAG: --manifest-url=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271509 4828 flags.go:64] FLAG: --manifest-url-header=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271517 4828 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271523 4828 flags.go:64] FLAG: --max-open-files="1000000"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271529 4828 flags.go:64] FLAG: --max-pods="110"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271535 4828 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271540 4828 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271546 4828 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271552 4828 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271557 4828 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271563 4828 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271568 4828 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271580 4828 flags.go:64] FLAG: --node-status-max-images="50"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271586 4828 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271591 4828 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271597 4828 flags.go:64] FLAG: --pod-cidr=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271602 4828 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271611 4828 flags.go:64] FLAG: --pod-manifest-path=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271617 4828 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271623 4828 flags.go:64] FLAG: --pods-per-core="0"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271629 4828 flags.go:64] FLAG: --port="10250"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271634 4828 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271641 4828 flags.go:64] FLAG: --provider-id=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271646 4828 flags.go:64] FLAG: --qos-reserved=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271652 4828 flags.go:64] FLAG: --read-only-port="10255"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271657 4828 flags.go:64] FLAG: --register-node="true"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271662 4828 flags.go:64] FLAG: --register-schedulable="true"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271668 4828 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271683 4828 flags.go:64] FLAG: --registry-burst="10"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271688 4828 flags.go:64] FLAG: --registry-qps="5"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271694 4828 flags.go:64] FLAG: --reserved-cpus=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271699 4828 flags.go:64] FLAG: --reserved-memory=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271706 4828 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271712 4828 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271717 4828 flags.go:64] FLAG: --rotate-certificates="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271722 4828 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271728 4828 flags.go:64] FLAG: --runonce="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271734 4828 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271739 4828 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271745 4828 flags.go:64] FLAG: --seccomp-default="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271750 4828 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271756 4828 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271765 4828 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271771 4828 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271777 4828 flags.go:64] FLAG: --storage-driver-password="root"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271782 4828 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271787 4828 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271793 4828 flags.go:64] FLAG: --storage-driver-user="root"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271798 4828 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271804 4828 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271810 4828 flags.go:64] FLAG: --system-cgroups=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271815 4828 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271840 4828 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271846 4828 flags.go:64] FLAG: --tls-cert-file=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271851 4828 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271862 4828 flags.go:64] FLAG: --tls-min-version=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271867 4828 flags.go:64] FLAG: --tls-private-key-file=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271873 4828 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271879 4828 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271884 4828 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271890 4828 flags.go:64] FLAG: --v="2"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271897 4828 flags.go:64] FLAG: --version="false"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271904 4828 flags.go:64] FLAG: --vmodule=""
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271910 4828 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.271916 4828 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272042 4828 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272049 4828 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272055 4828 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272061 4828 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272067 4828 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272072 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272077 4828 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272082 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272088 4828 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272095 4828 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272100 4828 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272105 4828 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272110 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272115 4828 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272120 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272125 4828 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272131 4828 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272137 4828 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272142 4828 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272148 4828 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272153 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272158 4828 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272163 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272168 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272176 4828 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272182 4828 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272188 4828 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272193 4828 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272198 4828 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272203 4828 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272208 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272213 4828 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272218 4828 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272222 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272227 4828 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272232 4828 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272237 4828 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272243 4828 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272250 4828 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272256 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272261 4828 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272268 4828 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272273 4828 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272277 4828 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272282 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272287 4828 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272292 4828 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272297 4828 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272303 4828 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272310 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272315 4828 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272322 4828 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272327 4828 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272332 4828 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272337 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272341 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272349 4828 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272354 4828 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272359 4828 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272363 4828 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272368 4828 feature_gate.go:330] unrecognized feature gate: Example
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272373 4828 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272379 4828 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272384 4828 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272389 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272394 4828 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272399 4828 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272404 4828 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272408 4828 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272413 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.272418 4828 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.272594 4828 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.285819 4828 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.285916 4828 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286068 4828 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286090 4828 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286100 4828 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286109 4828 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286120 4828 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286129 4828 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286138 4828 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286147 4828 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286156 4828 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286164 4828 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286175 4828 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286186 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286196 4828 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286205 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286214 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286223 4828 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286232 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286241 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286250 4828 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286258 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286267 4828 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286275 4828 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286284 4828 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286293 4828 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286301 4828 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286309 4828 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286318 4828 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286326 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286335 4828 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286343 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286351 4828 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286362 4828 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286377 4828 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286387 4828 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286397 4828 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286406 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286415 4828 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286423 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286432 4828 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286440 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286449 4828 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286457 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286465 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286474 4828 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286482 4828 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286491 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286543 4828 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286551 4828 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286560 4828 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286569 4828 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286577 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286586 4828 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286594 4828 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286603 4828 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286612 4828 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286621 4828 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286629 4828 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286637 4828 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286646 4828 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286654 4828 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286663 4828 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286672 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286680 4828 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286692 4828 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286703 4828 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286712 4828 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286720 4828 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286731 4828 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286742 4828 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286753 4828 feature_gate.go:330] unrecognized feature gate: Example
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.286762 4828 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.286776 4828 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287102 4828 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287121 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287131 4828 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287142 4828 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287153 4828 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287162 4828 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287172 4828 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287181 4828 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287190 4828 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287198 4828 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287207 4828 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287215 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287223 4828 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287234 4828 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287242 4828 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287250 4828 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287258 4828 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287267 4828 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287276 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287286 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287295 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287305 4828 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287313 4828 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287322 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287330 4828 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287339 4828 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287347 4828 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287355 4828 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287364 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287372 4828 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287381 4828 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287389 4828 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287397 4828 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287407 4828 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287415 4828 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287424 4828 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287432 4828 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287443 4828 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287454 4828 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287463 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287471 4828 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287480 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287491 4828 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287503 4828 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287514 4828 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287524 4828 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287534 4828 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287543 4828 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287552 4828 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287561 4828 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287570 4828 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287578 4828 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287586 4828 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287594 4828 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287603 4828 feature_gate.go:330] unrecognized feature gate: Example
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287612 4828 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287620 4828 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287628 4828 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287639 4828 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287650 4828 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287659 4828 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287668 4828 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287676 4828 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287685 4828 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287693 4828 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287701 4828 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287709 4828 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287718 4828 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287727 4828 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287735 4828 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.287747 4828 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.287760 4828 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.288538 4828 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.293534 4828 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.293686 4828 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.294656 4828 server.go:997] "Starting client certificate rotation"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.294709 4828 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.294958 4828 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 11:43:28.433867808 +0000 UTC
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.295025 4828 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 712h39m46.138845534s for next certificate rotation
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.304098 4828 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.306959 4828 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.317369 4828 log.go:25] "Validated CRI v1 runtime API"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.343638 4828 log.go:25] "Validated CRI v1 image API"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.345674 4828 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.349086 4828 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-05-18-59-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.349127 4828 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.371664 4828 manager.go:217] Machine: {Timestamp:2025-12-05 19:03:42.370137771 +0000 UTC m=+0.265360117 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:76f17a1f-8558-4034-87e9-acb0adb87b21 BootID:ef06a32a-0efd-4207-a13f-220645e4e6a2 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252
DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c5:d7:8d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c5:d7:8d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8b:ef:77 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:5d:86:6b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:35:51:03 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ae:c1:0d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:5e:b3:ef:fe:31:9a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:76:a2:39:31:60:0d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] 
SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.371950 4828 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.372129 4828 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.372990 4828 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.373325 4828 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.373373 4828 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.373689 4828 topology_manager.go:138] "Creating topology manager with none policy"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.373708 4828 container_manager_linux.go:303] "Creating device plugin manager"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.374062 4828 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.374116 4828 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.374521 4828 state_mem.go:36] "Initialized new in-memory state store"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.374662 4828 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.375452 4828 kubelet.go:418] "Attempting to sync node with API server"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.375484 4828 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.375521 4828 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.375541 4828 kubelet.go:324] "Adding apiserver pod source"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.375558 4828 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.377669 4828 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.377796 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.377899 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.377930 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.378051 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.378129 4828 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.379273 4828 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.379954 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.379981 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.379990 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.379999 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.380014 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.380025 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.380034 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.380050 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.380061 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.380071 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.380084 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.380093 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.380250 4828 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.380733 4828 server.go:1280] "Started kubelet"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.381552 4828 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.381670 4828 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.381799 4828 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 05 19:03:42 crc systemd[1]: Started Kubernetes Kubelet.
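The certificate_manager.go:356 lines (kube-apiserver-client-kubelet above, kubelet-serving just below) show how rotation is scheduled: the manager picks a deadline at a jittered point late in the certificate's validity window and then simply waits (here 712h39m for the client pair). A minimal sketch of that computation, assuming a 70-90% jitter window, since the exact upstream factors vary by release, and a hypothetical NotBefore, since the log only prints the expiration:

```go
// rotation_deadline.go - sketch of how a client-go style certificate manager
// derives the "rotation deadline" and "Waiting ..." values logged above.
// Not the kubelet's exact code; jitter factors and NotBefore are assumptions.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline returns a point somewhere in the last portion of the
// certificate's lifetime (70%..90% here, a hedged approximation).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Hypothetical one-year validity; only the expiration matches the log.
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	d := rotationDeadline(notBefore, notAfter)
	fmt.Printf("rotation deadline %s, waiting %s\n", d, time.Until(d).Round(time.Second))
}
```

The jitter spreads renewals out so a fleet of kubelets does not hammer the CSR signer at the same instant; each run of this sketch, like each kubelet restart, picks a slightly different deadline.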
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.383769 4828 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.383800 4828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187e670fff450b7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 19:03:42.380698492 +0000 UTC m=+0.275920808,LastTimestamp:2025-12-05 19:03:42.380698492 +0000 UTC m=+0.275920808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.385577 4828 server.go:460] "Adding debug handlers to kubelet server" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.387114 4828 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.387171 4828 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.387471 4828 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:00:29.174581671 +0000 UTC Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.387529 4828 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 932h56m46.787055087s for next certificate rotation Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.388468 4828 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.388512 4828 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.388780 4828 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.388795 4828 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.389951 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms" Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.389990 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.390454 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 05 19:03:42 crc 
kubenswrapper[4828]: I1205 19:03:42.393591 4828 factory.go:55] Registering systemd factory Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.394436 4828 factory.go:221] Registration of the systemd container factory successfully Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.395056 4828 factory.go:153] Registering CRI-O factory Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.395103 4828 factory.go:221] Registration of the crio container factory successfully Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.395212 4828 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.395247 4828 factory.go:103] Registering Raw factory Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.395268 4828 manager.go:1196] Started watching for new ooms in manager Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.396226 4828 manager.go:319] Starting recovery of all containers Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.408654 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.408762 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.408796 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.408881 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.408926 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.408953 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.408977 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409006 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409037 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409070 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409095 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409122 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409162 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409267 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409295 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409325 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409350 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409374 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409399 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409424 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409450 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409476 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409503 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409544 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409578 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409603 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409634 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409662 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409686 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409710 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409735 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409760 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409888 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409962 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.409992 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.410019 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.410091 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.410119 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.410156 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.410187 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.410214 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.410241 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.410269 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.412224 4828 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.413091 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414224 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414302 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414330 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414349 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414366 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414405 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414421 4828 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414440 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414472 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414495 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414518 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414539 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414565 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414583 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414602 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414618 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414630 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414641 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414673 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414685 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414697 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414710 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414723 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414737 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414752 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414765 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414777 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414791 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414803 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414815 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414848 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414865 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414881 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414899 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414919 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414935 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414950 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414967 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414980 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.414998 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415016 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415032 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415051 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415066 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415078 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415093 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415107 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415126 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415139 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415154 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415168 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415181 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415194 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415208 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415222 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415237 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415249 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415264 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415276 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415289 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415309 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415323 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415345 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415359 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415372 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415386 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415404 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415419 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415436 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415449 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415461 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415472 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415486 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415498 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415513 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415524 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415536 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415547 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415560 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415572 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415584 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415595 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415609 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415621 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415633 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415646 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415658 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415670 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415683 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415697 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415709 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415721 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415733 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415745 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415756 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415768 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415783 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415795 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415809 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415841 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415908 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415931 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415950 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.415971 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416033 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416051 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416069 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416085 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416101 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416117 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416132 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416149 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416165 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416183 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416204 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416222 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416239 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416258 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416273 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416290 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416306 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416325 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416342 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416361 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416380 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416396 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416413 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416429 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416446 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416464 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416482 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416497 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416519 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416535 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416551 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416565 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416576 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416588 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416600 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416612 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416625 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416638 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416650 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416663 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416677 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416689 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416702 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416726 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416778 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416792 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416804 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416818 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416850 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416863 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416875 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416890 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416902 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416914 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416929 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416941 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416954 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416966 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416985 4828 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.416999 4828 reconstruct.go:97] "Volume reconstruction finished" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.417012 4828 reconciler.go:26] "Reconciler: start to sync state" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.417631 4828 manager.go:324] Recovery completed Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.430688 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.438415 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.438538 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.438559 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.440293 4828 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.440314 4828 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.440363 4828 state_mem.go:36] "Initialized new in-memory state store" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.441476 4828 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.444396 4828 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.444559 4828 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.445203 4828 kubelet.go:2335] "Starting kubelet main sync loop" Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.445285 4828 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.445751 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.445978 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.451028 4828 policy_none.go:49] "None policy: Start" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.454374 4828 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.454402 4828 state_mem.go:35] "Initializing new in-memory state store" Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.488974 4828 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.503612 4828 manager.go:334] "Starting Device Plugin manager" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.503696 4828 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.503716 4828 server.go:79] "Starting device plugin registration server" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.504242 4828 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.504264 4828 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.504408 4828 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.504540 4828 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.504548 4828 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.514044 4828 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.545387 4828 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 05 19:03:42 crc kubenswrapper[4828]: 
I1205 19:03:42.545482 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.546412 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.546466 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.546484 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.546766 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.546896 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.546954 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.547724 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.547749 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.547758 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.547812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.547850 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.547861 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.547884 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.548072 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.548097 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.548604 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.548651 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.548664 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.548776 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.548815 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.548842 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.549114 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.549256 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.549298 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.549950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.549984 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.549995 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.550147 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.550160 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.550171 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.550300 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.550361 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.550403 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.551018 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.551057 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.551066 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.551169 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.551184 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.551222 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.551186 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.551310 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.551884 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.551911 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.551922 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.591562 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.608390 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.609274 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.609297 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.609306 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.609324 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.609629 4828 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: 
connection refused" node="crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.619644 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.619673 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.619692 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.619708 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.619723 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.619737 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.619812 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.619892 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.619934 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 19:03:42 crc 
kubenswrapper[4828]: I1205 19:03:42.619957 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.619975 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.620015 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.620040 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.620057 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.620071 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720634 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720700 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720734 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720766 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720795 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720851 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720807 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720908 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720936 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720882 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720880 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720974 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721017 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721039 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.720982 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721057 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721072 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721093 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721112 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721134 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721152 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721171 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721191 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:03:42 crc 
kubenswrapper[4828]: I1205 19:03:42.721196 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721228 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721253 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721275 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721299 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721298 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.721213 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.810449 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.811986 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.812062 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.812082 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.812120 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.812660 4828 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.886083 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.901329 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.909193 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-bf27ef3ed7b74b712f1558aa9b79e5c442fb7501b9ec2632c4755a0478850ce8 WatchSource:0}: Error finding container bf27ef3ed7b74b712f1558aa9b79e5c442fb7501b9ec2632c4755a0478850ce8: Status 404 returned error can't find the container with id bf27ef3ed7b74b712f1558aa9b79e5c442fb7501b9ec2632c4755a0478850ce8
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.918887 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.924951 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-36509f258f04e82f0cb4e3387d2a83aa3e82d05bb532b17d6a43a3a6122ed476 WatchSource:0}: Error finding container 36509f258f04e82f0cb4e3387d2a83aa3e82d05bb532b17d6a43a3a6122ed476: Status 404 returned error can't find the container with id 36509f258f04e82f0cb4e3387d2a83aa3e82d05bb532b17d6a43a3a6122ed476
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.934764 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.939859 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-6967ca5d056818ee6135697b6897fe3ae2116a769a6d12249c52aaa81f12843e WatchSource:0}: Error finding container 6967ca5d056818ee6135697b6897fe3ae2116a769a6d12249c52aaa81f12843e: Status 404 returned error can't find the container with id 6967ca5d056818ee6135697b6897fe3ae2116a769a6d12249c52aaa81f12843e
Dec 05 19:03:42 crc kubenswrapper[4828]: I1205 19:03:42.943284 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 05 19:03:42 crc kubenswrapper[4828]: W1205 19:03:42.952642 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5820d76b68267820d1989d93d70c82add6813d545d7f46be294e64245018a795 WatchSource:0}: Error finding container 5820d76b68267820d1989d93d70c82add6813d545d7f46be294e64245018a795: Status 404 returned error can't find the container with id 5820d76b68267820d1989d93d70c82add6813d545d7f46be294e64245018a795
Dec 05 19:03:42 crc kubenswrapper[4828]: E1205 19:03:42.992999 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.213093 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.215126 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.215172 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.215188 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.215220 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 05 19:03:43 crc kubenswrapper[4828]: E1205 19:03:43.215696 4828 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc"
Dec 05 19:03:43 crc kubenswrapper[4828]: W1205 19:03:43.304990 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Dec 05 19:03:43 crc kubenswrapper[4828]: E1205 19:03:43.305075 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.382853 4828 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.451710 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9" exitCode=0
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.451850 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9"}
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.451987 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"36509f258f04e82f0cb4e3387d2a83aa3e82d05bb532b17d6a43a3a6122ed476"}
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.452150 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.453275 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.453347 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.453372 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.454019 4828 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb" exitCode=0
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.454097 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb"}
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.454150 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bf27ef3ed7b74b712f1558aa9b79e5c442fb7501b9ec2632c4755a0478850ce8"}
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.454283 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.455659 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.455699 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.455715 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.456266 4828 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5d7dbae45d5825575b4ee70c74e486cf3da64b0da3d60feb158c8ad3b77749f6" exitCode=0
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.456367 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5d7dbae45d5825575b4ee70c74e486cf3da64b0da3d60feb158c8ad3b77749f6"}
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.456399 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"737843a0ff49c2d208dd763462d43f1e85946ba82d0a35f8d452ca15a838d8bf"}
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.456525 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.457491 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.457525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.457540 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.458446 4828 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf" exitCode=0
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.458519 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf"}
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.458540 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5820d76b68267820d1989d93d70c82add6813d545d7f46be294e64245018a795"}
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.458620 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.459949 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.459981 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.459993 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.461272 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c"}
Dec 05 19:03:43 crc kubenswrapper[4828]: I1205 19:03:43.461310 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6967ca5d056818ee6135697b6897fe3ae2116a769a6d12249c52aaa81f12843e"}
Dec 05 19:03:43 crc kubenswrapper[4828]: W1205 19:03:43.607142 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Dec 05 19:03:43 crc kubenswrapper[4828]: E1205 19:03:43.607238 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Dec 05 19:03:43 crc kubenswrapper[4828]: W1205 19:03:43.701397 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Dec 05 19:03:43 crc kubenswrapper[4828]: E1205 19:03:43.701498 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Dec 05 19:03:43 crc kubenswrapper[4828]: W1205 19:03:43.707225 4828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Dec 05 19:03:43 crc kubenswrapper[4828]: E1205 19:03:43.707354 4828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Dec 05 19:03:43 crc kubenswrapper[4828]: E1205 19:03:43.794236 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.015858 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.017392 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.017432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.017442 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.017471 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 05 19:03:44 crc kubenswrapper[4828]: E1205 19:03:44.017990 4828 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.465808 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b"}
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.465871 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183"}
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.465887 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a"}
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.465991 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.466725 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.466759 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.466768 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.467809 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.467799 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a"}
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.467911 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4"}
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.467934 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a"}
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.468454 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.468476 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.468484 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.470286 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421"}
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.470310 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38"}
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.470320 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b"}
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.470329 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96"}
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.471497 4828 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac" exitCode=0
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.471518 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac"}
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.471548 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.471624 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.472154 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.472191 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.472208 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.472227 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.472246 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.472256 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.898975 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 19:03:44 crc kubenswrapper[4828]: I1205 19:03:44.905239 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.476203 4828 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339" exitCode=0
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.476278 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339"}
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.476394 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.477245 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.477272 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.477284 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.478756 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6fc52dbbcfbf16aff6f2984f391cddea4e4e04ce4b402a312f47cf5c0840f119"}
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.478942 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.480010 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.480026 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.480035 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.481522 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.481577 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3"}
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.481658 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.481654 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.483172 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.483201 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.483210 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.483508 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.483542 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.483555 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.618631 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.619705 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.619781 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.619801 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:45 crc kubenswrapper[4828]: I1205 19:03:45.619871 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.491619 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e"}
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.491683 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.491795 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.491689 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1"}
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.491987 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2"}
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.492018 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.492046 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490"}
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.493145 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.493212 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.493230 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.493307 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.493339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:46 crc kubenswrapper[4828]: I1205 19:03:46.493356 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:47 crc kubenswrapper[4828]: I1205 19:03:47.501147 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:47 crc kubenswrapper[4828]: I1205 19:03:47.501929 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:47 crc kubenswrapper[4828]: I1205 19:03:47.502070 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0"}
Dec 05 19:03:47 crc kubenswrapper[4828]: I1205 19:03:47.502764 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:47 crc kubenswrapper[4828]: I1205 19:03:47.502811 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:47 crc kubenswrapper[4828]: I1205 19:03:47.502810 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:47 crc kubenswrapper[4828]: I1205 19:03:47.502879 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:47 crc kubenswrapper[4828]: I1205 19:03:47.502896 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:47 crc kubenswrapper[4828]: I1205 19:03:47.502849 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:47 crc kubenswrapper[4828]: I1205 19:03:47.814280 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Dec 05 19:03:47 crc kubenswrapper[4828]: I1205 19:03:47.993593 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.033999 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.034225 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.035625 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.035653 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.035663 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.505282 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.505290 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.506557 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.506600 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.506617 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.507268 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.507456 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.507659 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.888467 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.888735 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.890963 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.891019 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:48 crc kubenswrapper[4828]: I1205 19:03:48.891042 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:49 crc kubenswrapper[4828]: I1205 19:03:49.508013 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:49 crc kubenswrapper[4828]: I1205 19:03:49.509020 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:49 crc kubenswrapper[4828]: I1205 19:03:49.509060 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:49 crc kubenswrapper[4828]: I1205 19:03:49.509075 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:49 crc kubenswrapper[4828]: I1205 19:03:49.923180 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:03:49 crc kubenswrapper[4828]: I1205 19:03:49.923421 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:49 crc kubenswrapper[4828]: I1205 19:03:49.925137 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:49 crc kubenswrapper[4828]: I1205 19:03:49.925189 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:49 crc kubenswrapper[4828]: I1205 19:03:49.925212 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:50 crc kubenswrapper[4828]: I1205 19:03:50.418426 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 19:03:50 crc kubenswrapper[4828]: I1205 19:03:50.418635 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:50 crc kubenswrapper[4828]: I1205 19:03:50.420290 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:50 crc kubenswrapper[4828]: I1205 19:03:50.420495 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:50 crc kubenswrapper[4828]: I1205 19:03:50.420644 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:51 crc kubenswrapper[4828]: I1205 19:03:51.889205 4828 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 19:03:51 crc kubenswrapper[4828]: I1205 19:03:51.889897 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 05 19:03:52 crc kubenswrapper[4828]: E1205 19:03:52.514207 4828 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 05 19:03:53 crc kubenswrapper[4828]: I1205 19:03:53.933168 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 05 19:03:53 crc kubenswrapper[4828]: I1205 19:03:53.934057 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:53 crc kubenswrapper[4828]: I1205 19:03:53.934976 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:53 crc kubenswrapper[4828]: I1205 19:03:53.934999 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:53 crc kubenswrapper[4828]: I1205 19:03:53.935009 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:54 crc kubenswrapper[4828]: I1205 19:03:54.078551 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 19:03:54 crc kubenswrapper[4828]: I1205 19:03:54.078695 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:54 crc kubenswrapper[4828]: I1205 19:03:54.079776 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:54 crc kubenswrapper[4828]: I1205 19:03:54.079807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:54 crc kubenswrapper[4828]: I1205 19:03:54.079842 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:54 crc kubenswrapper[4828]: I1205 19:03:54.110315 4828 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 05 19:03:54 crc kubenswrapper[4828]: I1205 19:03:54.110364 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 05 19:03:54 crc kubenswrapper[4828]: I1205 19:03:54.383081 4828 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 05 19:03:55 crc kubenswrapper[4828]: I1205 19:03:55.088060 4828 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 05 19:03:55 crc kubenswrapper[4828]: I1205 19:03:55.088152 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 05 19:03:55 crc kubenswrapper[4828]: I1205 19:03:55.092862 4828 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 05 19:03:55 crc kubenswrapper[4828]: I1205 19:03:55.092932 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 05 19:03:59 crc kubenswrapper[4828]: I1205 19:03:59.930775 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:03:59 crc kubenswrapper[4828]: I1205 19:03:59.931075 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 19:03:59 crc kubenswrapper[4828]: I1205 19:03:59.932574 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:03:59 crc kubenswrapper[4828]: I1205 19:03:59.932647 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:03:59 crc kubenswrapper[4828]: I1205 19:03:59.932672 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:03:59 crc kubenswrapper[4828]: I1205 19:03:59.932883 4828 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 05 19:03:59 crc kubenswrapper[4828]: I1205 19:03:59.932941 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 05 19:03:59 crc kubenswrapper[4828]: I1205 19:03:59.938945 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.073936 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.076582 4828 trace.go:236] Trace[1387191192]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 19:03:45.927) (total time: 14149ms):
Dec 05 19:04:00 crc kubenswrapper[4828]: Trace[1387191192]: ---"Objects listed" error: 14149ms (19:04:00.076)
Dec 05 19:04:00 crc kubenswrapper[4828]: Trace[1387191192]: [14.149294038s] [14.149294038s] END
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.076630 4828 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.077494 4828 trace.go:236] Trace[255779662]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 19:03:45.758) (total time: 14318ms):
Dec 05 19:04:00 crc kubenswrapper[4828]: Trace[255779662]: ---"Objects listed" error: 14318ms (19:04:00.077)
Dec 05 19:04:00 crc kubenswrapper[4828]: Trace[255779662]: [14.318383654s] [14.318383654s] END
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.077544 4828 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.079241 4828 trace.go:236] Trace[1221789152]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 19:03:46.700) (total time: 13379ms):
Dec 05 19:04:00 crc kubenswrapper[4828]: Trace[1221789152]: ---"Objects listed" error: 13378ms (19:04:00.079)
Dec 05 19:04:00 crc kubenswrapper[4828]: Trace[1221789152]: [13.379005813s] [13.379005813s] END
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.079281 4828 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.079742 4828 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.079917 4828 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.081635 4828 trace.go:236] Trace[459908393]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 19:03:46.116) (total time: 13965ms):
Dec 05 19:04:00 crc kubenswrapper[4828]: Trace[459908393]: ---"Objects listed" error: 13964ms (19:04:00.081)
Dec 05 19:04:00 crc kubenswrapper[4828]: Trace[459908393]: [13.965129463s] [13.965129463s] END
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.081673 4828 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.384317 4828 apiserver.go:52] "Watching apiserver"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.386658 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.388345 4828 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.388652 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.389197 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.389240 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.389251 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.389310 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.389592 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.389648 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.389646 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.389679 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.389879 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.391733 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.391748 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.394082 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.394139 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.394208 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.394332 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.394400 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.394440 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.394521 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.394605 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.401964 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.424591 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.435636 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.446707 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.458331 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.464290 4828 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45998->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.464391 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45998->192.168.126.11:17697: read: connection reset by peer" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.472934 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.481638 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.481711 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.481741 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.481788 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.481813 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.483098 4828 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.486892 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.492635 4828 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.492912 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.498886 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.498930 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.498946 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.499019 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:00.998995186 +0000 UTC m=+18.894217492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.504713 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.510028 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.511311 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.511395 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.511451 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.511552 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:01.011534889 +0000 UTC m=+18.906757195 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.519153 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.528867 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.538002 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.540011 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3" exitCode=255 Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.540217 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3"} Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.540939 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.546358 4828 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.547514 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.547708 4828 scope.go:117] "RemoveContainer" containerID="72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.551636 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.560151 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.569563 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.579575 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582038 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582074 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582093 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582112 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582149 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582167 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582186 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 19:04:00 
crc kubenswrapper[4828]: I1205 19:04:00.582204 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582218 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582237 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582252 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582266 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582281 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582297 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582314 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582364 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582383 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582405 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582428 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582447 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582489 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582506 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582521 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582559 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582583 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582600 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582614 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582644 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582689 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582703 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582718 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582732 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582746 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582760 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582775 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582785 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582788 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582789 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582844 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582859 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582875 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582890 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582932 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582947 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" 
(UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582963 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582978 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.582994 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583008 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583023 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583038 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583053 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583067 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583086 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583107 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583125 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583144 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583147 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583164 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583170 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583184 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583207 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583227 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583246 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583261 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583276 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583291 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583308 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583310 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583324 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583344 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583361 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583376 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583391 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583407 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583415 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583422 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583470 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583506 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583534 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583559 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583582 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583606 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583630 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583642 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583653 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583676 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583700 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583724 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583745 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583767 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583788 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583811 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583860 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583885 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583905 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583922 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583938 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583953 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583967 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583982 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583998 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584012 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584028 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584045 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584063 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584079 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584096 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584112 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584128 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584143 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584158 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584175 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584191 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584206 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584221 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584237 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584253 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584300 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584316 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584331 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584348 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584365 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584380 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584397 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584412 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584426 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584441 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584456 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584471 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584487 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584502 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584516 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584530 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584547 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584562 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584578 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584594 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584611 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584626 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584642 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584659 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584674 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584691 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584706 4828 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584760 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584776 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584795 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584852 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584874 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584891 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584932 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584956 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584983 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584999 4828 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585015 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585037 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585061 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585082 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585101 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585116 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585133 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585149 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585165 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 19:04:00 crc kubenswrapper[4828]: 
I1205 19:04:00.585181 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585196 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585213 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585260 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585278 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585298 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585314 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585330 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585345 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585361 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 
05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585378 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585413 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585441 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585466 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585490 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585513 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585538 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585565 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585582 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583833 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583847 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583979 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.583990 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584147 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584268 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584396 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584548 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584713 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584763 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.590865 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584906 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.584909 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585055 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585062 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585198 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585341 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585495 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585485 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585598 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.585609 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:04:01.085587793 +0000 UTC m=+18.980810109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.591042 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.586525 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.586775 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.586959 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.587023 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.587038 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.587141 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.587378 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.587455 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.587850 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.588041 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.588278 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.588512 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.588428 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.588714 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.589079 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.592034 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.589327 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.589600 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.589734 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.589450 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.590327 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.590409 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.590607 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.590715 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.591432 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.585404 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.591519 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.591602 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.591644 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.590969 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.591623 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.591688 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.591729 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.591852 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.592091 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.592428 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.592482 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.592525 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.592874 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.593078 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.593263 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.593306 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.593403 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.593427 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.593549 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.593891 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.593925 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.593972 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.594071 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.594341 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.594504 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.594481 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.594566 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.594611 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.591094 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.595100 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.595170 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.595652 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.595734 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.595799 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596009 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596079 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596153 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596216 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596307 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596374 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596438 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596500 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596566 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596634 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596706 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596813 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596901 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596985 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597086 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597149 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597230 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597307 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597382 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597451 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597552 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597630 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597703 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597772 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597868 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597952 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598063 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598140 4828 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598197 4828 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598250 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598302 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598353 4828 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598404 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598471 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598536 4828 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598599 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598651 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598703 4828 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598764 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598838 4828 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598895 4828 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598959 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599015 4828 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599108 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599169 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599223 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599285 4828 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599341 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599394 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599447 4828 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599505 4828 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599557 4828 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599609 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599664 4828 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599733 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599808 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599885 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599949 4828 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600013 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600075 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600137 4828 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600194 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600247 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600299 4828 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600350 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600411 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600464 4828 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600520 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600572 4828 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600662 4828 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600746 4828 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600803 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600882 4828 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600947 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601006 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601098 4828 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601153 4828 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601205 4828 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601271 4828 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601327 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601379 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601430 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601480 4828 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601593 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601651 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601707 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601759 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601812 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601970 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602031 4828 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602083 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602133 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602185 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602238 4828 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602291 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602348 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602402 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602455 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 05 19:04:00 crc kubenswrapper[4828]:
I1205 19:04:00.603019 4828 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603095 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603164 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603225 4828 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603279 4828 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603332 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603384 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603442 4828 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603501 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603554 4828 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603605 4828 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603486 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603943 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603132 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602984 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.595257 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.595471 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.604212 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.595982 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.604240 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596125 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596186 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596315 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596428 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596458 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596411 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.596877 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597046 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597095 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597167 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597173 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597379 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597326 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597525 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597698 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597709 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597849 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597873 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597890 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.597812 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598229 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598463 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598776 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598816 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.598997 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599086 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599098 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599103 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599351 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599389 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599650 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599651 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.599803 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600617 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.600926 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601184 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.601283 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602069 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602230 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602376 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602648 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602675 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602866 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.602993 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.603051 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603250 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.603483 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603605 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.604260 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.604462 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.604627 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.604655 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:01.104633594 +0000 UTC m=+18.999855990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:04:00 crc kubenswrapper[4828]: E1205 19:04:00.604728 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:01.104679965 +0000 UTC m=+18.999902261 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.603989 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.611361 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.611556 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.611661 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.611702 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.612244 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.612286 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.612433 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.613221 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.613873 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.614165 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.614244 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.614381 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.614546 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.614708 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.615960 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.616126 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.616279 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.616311 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.616553 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.616688 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.616731 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.616922 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.617035 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.617356 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.617443 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.617642 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.617674 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.617959 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.618082 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.618539 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.620170 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.620183 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.620258 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.621649 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.622155 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.623924 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.625796 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.626675 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.626867 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.627118 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.627118 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.627181 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.627394 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.627440 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.627515 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.627641 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.627765 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.628244 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.625071 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.628351 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.629011 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.629106 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.629332 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.629452 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.629486 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.629559 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.630309 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.630444 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.639054 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.640804 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.648339 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.653076 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.655868 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.658885 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.699615 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705019 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705089 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705113 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705122 4828 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705131 4828 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705141 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705149 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705158 4828 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705149 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705166 4828 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705207 4828 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705217 4828 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" 
DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705225 4828 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705234 4828 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705242 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705250 4828 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705258 4828 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705267 4828 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705274 4828 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705282 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705290 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705299 4828 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705307 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705315 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705324 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 
19:04:00.705333 4828 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705342 4828 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705352 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705360 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705369 4828 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705379 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705388 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705396 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705406 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705414 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705422 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705430 4828 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705439 4828 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705446 4828 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705457 4828 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705465 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705473 4828 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705481 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705490 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705499 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705506 4828 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705514 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705522 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705529 4828 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705537 4828 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705545 4828 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705564 4828 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705572 4828 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705580 4828 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705588 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705599 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705608 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705619 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705628 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705635 4828 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705643 4828 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705652 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705660 4828 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705667 4828 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705676 4828 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705684 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705692 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705702 4828 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705710 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705719 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705728 4828 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705736 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705744 4828 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705752 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705761 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705769 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705778 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705787 4828 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705796 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705804 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705813 4828 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705839 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705852 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705863 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705873 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705883 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705894 4828 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705904 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705915 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705926 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705936 4828 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705948 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705958 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705968 4828 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705978 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.705988 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706000 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706011 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706021 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706031 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706042 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706063 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706075 4828 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706084 4828 
reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706092 4828 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706100 4828 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706108 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706115 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706123 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706131 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706139 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706155 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706163 4828 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.706206 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 19:04:00 crc kubenswrapper[4828]: W1205 19:04:00.708949 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-cfaad0464b07935654444e32027edda3a4bfc67f3ac62b62460c8ea5cff7ee9c WatchSource:0}: Error finding container cfaad0464b07935654444e32027edda3a4bfc67f3ac62b62460c8ea5cff7ee9c: Status 404 returned error can't find the container with id cfaad0464b07935654444e32027edda3a4bfc67f3ac62b62460c8ea5cff7ee9c Dec 05 19:04:00 crc kubenswrapper[4828]: I1205 19:04:00.711944 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 19:04:00 crc kubenswrapper[4828]: W1205 19:04:00.727197 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-162809fe1bf4045d1c067ab2ec9c1dc75a5bb1cb1a29f6dba6f765d96018d5a2 WatchSource:0}: Error finding container 162809fe1bf4045d1c067ab2ec9c1dc75a5bb1cb1a29f6dba6f765d96018d5a2: Status 404 returned error can't find the container with id 162809fe1bf4045d1c067ab2ec9c1dc75a5bb1cb1a29f6dba6f765d96018d5a2 Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.009403 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.009594 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.009609 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.009620 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.009676 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:02.009647957 +0000 UTC m=+19.904870263 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.109886 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.109960 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.109999 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.110024 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.110078 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.110120 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:02.110107114 +0000 UTC m=+20.005329410 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.110406 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:04:02.110397912 +0000 UTC m=+20.005620218 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.110486 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.110503 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.110513 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.110535 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:02.110529505 +0000 UTC m=+20.005751811 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.110586 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:04:01 crc kubenswrapper[4828]: E1205 19:04:01.110606 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:02.110600737 +0000 UTC m=+20.005823043 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.528340 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nm8v4"] Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.528690 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-nm8v4" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.530619 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.534555 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.535414 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.543581 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.545458 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519"} Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.545719 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.546784 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3"} Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.546817 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"162809fe1bf4045d1c067ab2ec9c1dc75a5bb1cb1a29f6dba6f765d96018d5a2"} Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.547550 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e8404a168211c0fd0f531077ee6e287fefa955fcfbf8230603ede3bd1947d511"} Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.549019 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89"} Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.549073 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034"} Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.549092 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cfaad0464b07935654444e32027edda3a4bfc67f3ac62b62460c8ea5cff7ee9c"} Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.554658 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.567537 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.582764 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05
T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.594187 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.604503 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.613955 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftrdf\" (UniqueName: \"kubernetes.io/projected/ca9d9c5b-3bb6-4341-a670-8dec89ab476e-kube-api-access-ftrdf\") pod \"node-resolver-nm8v4\" (UID: \"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\") " pod="openshift-dns/node-resolver-nm8v4" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.614010 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ca9d9c5b-3bb6-4341-a670-8dec89ab476e-hosts-file\") pod \"node-resolver-nm8v4\" (UID: \"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\") " pod="openshift-dns/node-resolver-nm8v4" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.615073 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.626741 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-phlsx"] Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.627035 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-phlsx" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.635994 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.636250 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.636493 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.636992 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.641956 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.664896 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.691972 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.705314 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.714975 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ca9d9c5b-3bb6-4341-a670-8dec89ab476e-hosts-file\") pod \"node-resolver-nm8v4\" (UID: \"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\") " pod="openshift-dns/node-resolver-nm8v4" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.715012 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a660bd9-b4aa-4858-89e9-52a3782162d2-host\") pod \"node-ca-phlsx\" (UID: \"5a660bd9-b4aa-4858-89e9-52a3782162d2\") " pod="openshift-image-registry/node-ca-phlsx" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.715029 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5a660bd9-b4aa-4858-89e9-52a3782162d2-serviceca\") pod \"node-ca-phlsx\" (UID: \"5a660bd9-b4aa-4858-89e9-52a3782162d2\") " pod="openshift-image-registry/node-ca-phlsx" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.715057 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgm84\" (UniqueName: \"kubernetes.io/projected/5a660bd9-b4aa-4858-89e9-52a3782162d2-kube-api-access-lgm84\") pod \"node-ca-phlsx\" (UID: \"5a660bd9-b4aa-4858-89e9-52a3782162d2\") " pod="openshift-image-registry/node-ca-phlsx" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.715082 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftrdf\" (UniqueName: \"kubernetes.io/projected/ca9d9c5b-3bb6-4341-a670-8dec89ab476e-kube-api-access-ftrdf\") pod \"node-resolver-nm8v4\" (UID: \"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\") " pod="openshift-dns/node-resolver-nm8v4" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.715128 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ca9d9c5b-3bb6-4341-a670-8dec89ab476e-hosts-file\") pod \"node-resolver-nm8v4\" (UID: \"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\") " pod="openshift-dns/node-resolver-nm8v4" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.722078 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.733082 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftrdf\" (UniqueName: \"kubernetes.io/projected/ca9d9c5b-3bb6-4341-a670-8dec89ab476e-kube-api-access-ftrdf\") pod \"node-resolver-nm8v4\" (UID: \"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\") " pod="openshift-dns/node-resolver-nm8v4" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.745647 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.761292 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.772774 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.782235 4828 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.798572 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.811285 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.816329 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgm84\" (UniqueName: \"kubernetes.io/projected/5a660bd9-b4aa-4858-89e9-52a3782162d2-kube-api-access-lgm84\") pod \"node-ca-phlsx\" (UID: \"5a660bd9-b4aa-4858-89e9-52a3782162d2\") " pod="openshift-image-registry/node-ca-phlsx" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.816374 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a660bd9-b4aa-4858-89e9-52a3782162d2-host\") pod \"node-ca-phlsx\" (UID: \"5a660bd9-b4aa-4858-89e9-52a3782162d2\") " pod="openshift-image-registry/node-ca-phlsx" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.816392 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5a660bd9-b4aa-4858-89e9-52a3782162d2-serviceca\") pod \"node-ca-phlsx\" (UID: \"5a660bd9-b4aa-4858-89e9-52a3782162d2\") " pod="openshift-image-registry/node-ca-phlsx" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.816560 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a660bd9-b4aa-4858-89e9-52a3782162d2-host\") pod \"node-ca-phlsx\" (UID: \"5a660bd9-b4aa-4858-89e9-52a3782162d2\") " pod="openshift-image-registry/node-ca-phlsx" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.817182 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5a660bd9-b4aa-4858-89e9-52a3782162d2-serviceca\") pod \"node-ca-phlsx\" (UID: \"5a660bd9-b4aa-4858-89e9-52a3782162d2\") " pod="openshift-image-registry/node-ca-phlsx" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.834222 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgm84\" (UniqueName: \"kubernetes.io/projected/5a660bd9-b4aa-4858-89e9-52a3782162d2-kube-api-access-lgm84\") pod \"node-ca-phlsx\" (UID: \"5a660bd9-b4aa-4858-89e9-52a3782162d2\") " pod="openshift-image-registry/node-ca-phlsx" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.835209 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.841186 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nm8v4" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.849524 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:01 crc kubenswrapper[4828]: I1205 19:04:01.937496 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-phlsx" Dec 05 19:04:01 crc kubenswrapper[4828]: W1205 19:04:01.949193 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a660bd9_b4aa_4858_89e9_52a3782162d2.slice/crio-c67b64c67a87491bf32dabcd665b25dc7997d37e4208ab1402e6732a4c988f8d WatchSource:0}: Error finding container c67b64c67a87491bf32dabcd665b25dc7997d37e4208ab1402e6732a4c988f8d: Status 404 returned error can't find the container with id c67b64c67a87491bf32dabcd665b25dc7997d37e4208ab1402e6732a4c988f8d Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.017752 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.017900 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.017915 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.017926 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.017973 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:04.017960044 +0000 UTC m=+21.913182350 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.054363 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-nlqsv"] Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.054788 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.059943 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.060198 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.060421 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.065786 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.065879 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.089584 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.117839 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.118197 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.118277 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.118349 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:04:04.118328939 +0000 UTC m=+22.013551255 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.118354 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.118394 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a74199c1-79be-49b4-9c04-fdb48847c85e-proxy-tls\") pod \"machine-config-daemon-nlqsv\" (UID: \"a74199c1-79be-49b4-9c04-fdb48847c85e\") " pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.118412 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:04.118403381 +0000 UTC m=+22.013625687 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.118441 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a74199c1-79be-49b4-9c04-fdb48847c85e-rootfs\") pod \"machine-config-daemon-nlqsv\" (UID: \"a74199c1-79be-49b4-9c04-fdb48847c85e\") " pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.118462 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a74199c1-79be-49b4-9c04-fdb48847c85e-mcd-auth-proxy-config\") pod \"machine-config-daemon-nlqsv\" (UID: \"a74199c1-79be-49b4-9c04-fdb48847c85e\") " pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.118496 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.118519 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.118545 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65pzc\" (UniqueName: \"kubernetes.io/projected/a74199c1-79be-49b4-9c04-fdb48847c85e-kube-api-access-65pzc\") pod \"machine-config-daemon-nlqsv\" (UID: \"a74199c1-79be-49b4-9c04-fdb48847c85e\") " pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.118659 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.118683 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.118696 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.118728 4828 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:04.118718119 +0000 UTC m=+22.013940425 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.118777 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.118851 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:04.118834573 +0000 UTC m=+22.014056869 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.141030 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.156036 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.174362 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.187892 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.208358 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.219066 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65pzc\" (UniqueName: \"kubernetes.io/projected/a74199c1-79be-49b4-9c04-fdb48847c85e-kube-api-access-65pzc\") pod \"machine-config-daemon-nlqsv\" (UID: \"a74199c1-79be-49b4-9c04-fdb48847c85e\") " pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.219108 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a74199c1-79be-49b4-9c04-fdb48847c85e-proxy-tls\") pod \"machine-config-daemon-nlqsv\" (UID: \"a74199c1-79be-49b4-9c04-fdb48847c85e\") " pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.219138 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a74199c1-79be-49b4-9c04-fdb48847c85e-rootfs\") pod \"machine-config-daemon-nlqsv\" (UID: \"a74199c1-79be-49b4-9c04-fdb48847c85e\") " pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.219155 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/a74199c1-79be-49b4-9c04-fdb48847c85e-mcd-auth-proxy-config\") pod \"machine-config-daemon-nlqsv\" (UID: \"a74199c1-79be-49b4-9c04-fdb48847c85e\") " pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.219249 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a74199c1-79be-49b4-9c04-fdb48847c85e-rootfs\") pod \"machine-config-daemon-nlqsv\" (UID: \"a74199c1-79be-49b4-9c04-fdb48847c85e\") " pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.219799 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a74199c1-79be-49b4-9c04-fdb48847c85e-mcd-auth-proxy-config\") pod \"machine-config-daemon-nlqsv\" (UID: \"a74199c1-79be-49b4-9c04-fdb48847c85e\") " pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.223306 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.223976 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a74199c1-79be-49b4-9c04-fdb48847c85e-proxy-tls\") pod \"machine-config-daemon-nlqsv\" (UID: \"a74199c1-79be-49b4-9c04-fdb48847c85e\") " pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.238236 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65pzc\" (UniqueName: \"kubernetes.io/projected/a74199c1-79be-49b4-9c04-fdb48847c85e-kube-api-access-65pzc\") pod \"machine-config-daemon-nlqsv\" (UID: \"a74199c1-79be-49b4-9c04-fdb48847c85e\") " pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.239185 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.258557 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.275212 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.371130 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.445643 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.445664 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.445783 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.446052 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.446140 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:02 crc kubenswrapper[4828]: E1205 19:04:02.446304 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.449463 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.450032 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.451086 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.451877 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.452524 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.453174 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.453849 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.454541 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.459111 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.459895 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.462472 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.463307 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.464070 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.464942 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.465440 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.466326 4828 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.466848 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.467411 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.468165 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.468775 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.470093 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.471141 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.471876 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.472911 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.473742 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.476221 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.480293 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.481110 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.481632 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.482195 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.482713 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.483186 4828 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.483285 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.486089 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.487077 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.495197 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.497887 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.502044 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.508113 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.508636 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.511797 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.513061 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.514528 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.517567 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.518666 4828 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.519373 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.520346 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.521019 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.522235 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.523103 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.524262 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.524885 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.525430 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.526509 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.527133 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.528033 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.528441 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tzshq"] Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.529780 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ksv4w"] Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.530171 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lnk88"] Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.530691 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.530690 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 
2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.530746 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.530997 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.534319 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.534352 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.534363 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.534419 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.534436 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.534492 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.534596 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.534742 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.534645 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.534921 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.534653 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.535025 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.534890 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.540411 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.552407 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e"} Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.552460 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" 
event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"23d53e09679f5042d8fbee337ff3478ff464ae9edc1413b68c8d4abc3b506f8e"} Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.553280 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-phlsx" event={"ID":"5a660bd9-b4aa-4858-89e9-52a3782162d2","Type":"ContainerStarted","Data":"4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c"} Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.553304 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-phlsx" event={"ID":"5a660bd9-b4aa-4858-89e9-52a3782162d2","Type":"ContainerStarted","Data":"c67b64c67a87491bf32dabcd665b25dc7997d37e4208ab1402e6732a4c988f8d"} Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.554878 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nm8v4" event={"ID":"ca9d9c5b-3bb6-4341-a670-8dec89ab476e","Type":"ContainerStarted","Data":"02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba"} Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.554905 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nm8v4" event={"ID":"ca9d9c5b-3bb6-4341-a670-8dec89ab476e","Type":"ContainerStarted","Data":"eee4c9908db666b38ba8a705aafa5f6d5bbb5f017bc249128e96a470cbc8ff75"} Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.559481 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd
90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.575060 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.587062 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.602914 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.621386 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.622673 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-run-ovn-kubernetes\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.622710 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1be569ff-0725-412f-ac1a-da4f5077bc17-ovn-node-metrics-cert\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc 
kubenswrapper[4828]: I1205 19:04:02.622727 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-var-lib-cni-multus\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.622756 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-multus-conf-dir\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.622784 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-slash\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.622809 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-env-overrides\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.622861 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-var-lib-cni-bin\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.622881 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3ae83517-5582-40f0-8f8c-f61e17a0b812-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.622899 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-openvswitch\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.622936 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcjjh\" (UniqueName: \"kubernetes.io/projected/e927a669-7d9d-442a-b020-339804e95af2-kube-api-access-rcjjh\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.622960 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-cni-netd\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.622975 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-multus-socket-dir-parent\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623004 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-run-k8s-cni-cncf-io\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623020 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-run-multus-certs\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623037 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3ae83517-5582-40f0-8f8c-f61e17a0b812-cni-binary-copy\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623077 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-ovnkube-config\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623093 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v24dk\" (UniqueName: \"kubernetes.io/projected/1be569ff-0725-412f-ac1a-da4f5077bc17-kube-api-access-v24dk\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623109 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-hostroot\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623132 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-systemd\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623165 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-log-socket\") pod 
\"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623182 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-run-netns\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623197 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3ae83517-5582-40f0-8f8c-f61e17a0b812-cnibin\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623242 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-var-lib-kubelet\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623259 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-run-netns\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623274 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623291 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-ovnkube-script-lib\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623343 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-cnibin\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623357 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-etc-kubernetes\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623388 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-ovn\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623403 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-multus-cni-dir\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623417 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e927a669-7d9d-442a-b020-339804e95af2-cni-binary-copy\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623432 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-var-lib-openvswitch\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623446 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ae83517-5582-40f0-8f8c-f61e17a0b812-system-cni-dir\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623475 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-etc-openvswitch\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623513 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-node-log\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623543 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e927a669-7d9d-442a-b020-339804e95af2-multus-daemon-config\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623558 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-cni-bin\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623574 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-8nlt9\" (UniqueName: \"kubernetes.io/projected/3ae83517-5582-40f0-8f8c-f61e17a0b812-kube-api-access-8nlt9\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623600 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-kubelet\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623628 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-os-release\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623642 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3ae83517-5582-40f0-8f8c-f61e17a0b812-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623657 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3ae83517-5582-40f0-8f8c-f61e17a0b812-os-release\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623678 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-systemd-units\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.623709 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-system-cni-dir\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.639237 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.653123 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.665552 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.678671 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.691380 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.704456 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.718007 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724302 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-cnibin\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724477 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-etc-kubernetes\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724559 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-etc-kubernetes\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724567 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-multus-cni-dir\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724370 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-cnibin\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724645 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e927a669-7d9d-442a-b020-339804e95af2-cni-binary-copy\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724666 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-ovn\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724710 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-var-lib-openvswitch\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724726 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ae83517-5582-40f0-8f8c-f61e17a0b812-system-cni-dir\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724742 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-etc-openvswitch\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724785 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-var-lib-openvswitch\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724811 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-ovn\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724862 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-node-log\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724896 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ae83517-5582-40f0-8f8c-f61e17a0b812-system-cni-dir\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724946 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-node-log\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724925 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-etc-openvswitch\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.724978 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e927a669-7d9d-442a-b020-339804e95af2-multus-daemon-config\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.725002 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-cni-bin\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.725131 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-cni-bin\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.725202 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-multus-cni-dir\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.725492 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e927a669-7d9d-442a-b020-339804e95af2-cni-binary-copy\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.725512 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e927a669-7d9d-442a-b020-339804e95af2-multus-daemon-config\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.725545 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8nlt9\" (UniqueName: \"kubernetes.io/projected/3ae83517-5582-40f0-8f8c-f61e17a0b812-kube-api-access-8nlt9\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.725567 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-os-release\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.725619 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3ae83517-5582-40f0-8f8c-f61e17a0b812-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.725890 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-os-release\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.725956 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-kubelet\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.725973 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3ae83517-5582-40f0-8f8c-f61e17a0b812-os-release\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726027 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-kubelet\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726058 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-systemd-units\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726122 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3ae83517-5582-40f0-8f8c-f61e17a0b812-os-release\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726166 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-systemd-units\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726198 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-system-cni-dir\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726221 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1be569ff-0725-412f-ac1a-da4f5077bc17-ovn-node-metrics-cert\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726570 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-var-lib-cni-multus\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726597 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-multus-conf-dir\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726641 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-run-ovn-kubernetes\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726661 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-slash\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726734 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-env-overrides\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726763 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-var-lib-cni-bin\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726785 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3ae83517-5582-40f0-8f8c-f61e17a0b812-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lnk88\" 
(UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726853 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcjjh\" (UniqueName: \"kubernetes.io/projected/e927a669-7d9d-442a-b020-339804e95af2-kube-api-access-rcjjh\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726871 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-openvswitch\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726903 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-cni-netd\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726917 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-multus-socket-dir-parent\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726930 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-run-k8s-cni-cncf-io\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726947 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-run-multus-certs\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726979 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3ae83517-5582-40f0-8f8c-f61e17a0b812-cni-binary-copy\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726996 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v24dk\" (UniqueName: \"kubernetes.io/projected/1be569ff-0725-412f-ac1a-da4f5077bc17-kube-api-access-v24dk\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727011 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-hostroot\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " 
pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727024 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-ovnkube-config\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727054 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-run-netns\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727068 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3ae83517-5582-40f0-8f8c-f61e17a0b812-cnibin\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727092 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-systemd\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727105 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-log-socket\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727136 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-var-lib-kubelet\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727153 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-run-netns\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727167 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727182 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-ovnkube-script-lib\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc 
kubenswrapper[4828]: I1205 19:04:02.727399 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-slash\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727410 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-multus-socket-dir-parent\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726383 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3ae83517-5582-40f0-8f8c-f61e17a0b812-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727483 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-var-lib-cni-multus\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727503 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-multus-conf-dir\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727538 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-run-ovn-kubernetes\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727571 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-cni-netd\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727601 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-openvswitch\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727672 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-var-lib-cni-bin\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727709 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-run-netns\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727735 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-run-k8s-cni-cncf-io\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727768 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-run-multus-certs\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.726306 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-system-cni-dir\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727936 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-hostroot\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727950 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-log-socket\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.727874 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3ae83517-5582-40f0-8f8c-f61e17a0b812-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.728170 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-ovnkube-script-lib\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.728194 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3ae83517-5582-40f0-8f8c-f61e17a0b812-cnibin\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.728223 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-systemd\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 
19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.728224 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e927a669-7d9d-442a-b020-339804e95af2-host-var-lib-kubelet\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.728242 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-run-netns\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.728248 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.728403 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-env-overrides\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.728525 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3ae83517-5582-40f0-8f8c-f61e17a0b812-cni-binary-copy\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.728737 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-ovnkube-config\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.735359 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1be569ff-0725-412f-ac1a-da4f5077bc17-ovn-node-metrics-cert\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.742318 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.757557 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcjjh\" (UniqueName: \"kubernetes.io/projected/e927a669-7d9d-442a-b020-339804e95af2-kube-api-access-rcjjh\") pod \"multus-ksv4w\" (UID: \"e927a669-7d9d-442a-b020-339804e95af2\") " pod="openshift-multus/multus-ksv4w" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.757559 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nlt9\" (UniqueName: \"kubernetes.io/projected/3ae83517-5582-40f0-8f8c-f61e17a0b812-kube-api-access-8nlt9\") pod \"multus-additional-cni-plugins-lnk88\" (UID: \"3ae83517-5582-40f0-8f8c-f61e17a0b812\") " pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.757839 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v24dk\" (UniqueName: \"kubernetes.io/projected/1be569ff-0725-412f-ac1a-da4f5077bc17-kube-api-access-v24dk\") pod \"ovnkube-node-tzshq\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.766605 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.778853 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.791368 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.803414 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.812455 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.822391 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.834308 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.848088 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.860849 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.863883 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lnk88" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.874214 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:02 crc kubenswrapper[4828]: I1205 19:04:02.882943 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ksv4w" Dec 05 19:04:03 crc kubenswrapper[4828]: W1205 19:04:03.087602 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ae83517_5582_40f0_8f8c_f61e17a0b812.slice/crio-8a43b8518f643e153f9217b4a9c72518fa6d1cc450003fca04d310899156d55b WatchSource:0}: Error finding container 8a43b8518f643e153f9217b4a9c72518fa6d1cc450003fca04d310899156d55b: Status 404 returned error can't find the container with id 8a43b8518f643e153f9217b4a9c72518fa6d1cc450003fca04d310899156d55b Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.280758 4828 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.287278 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.287553 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.287561 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.287643 4828 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.295545 4828 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.295775 4828 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.297190 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.297231 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.297247 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.297268 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.297284 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:03Z","lastTransitionTime":"2025-12-05T19:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:03 crc kubenswrapper[4828]: E1205 19:04:03.326392 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 
2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.329947 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.329985 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.329996 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.330013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.330024 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:03Z","lastTransitionTime":"2025-12-05T19:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:03 crc kubenswrapper[4828]: E1205 19:04:03.340474 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 
2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.344881 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.344932 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.344948 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.344971 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.344986 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:03Z","lastTransitionTime":"2025-12-05T19:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:03 crc kubenswrapper[4828]: E1205 19:04:03.359734 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 
2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.362985 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.363028 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.363052 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.363070 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.363080 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:03Z","lastTransitionTime":"2025-12-05T19:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:03 crc kubenswrapper[4828]: E1205 19:04:03.373962 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 
2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.376531 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.376582 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.376595 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.376612 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.376624 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:03Z","lastTransitionTime":"2025-12-05T19:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:03 crc kubenswrapper[4828]: E1205 19:04:03.389259 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 
2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: E1205 19:04:03.389410 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.391254 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.391289 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.391302 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.391318 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.391329 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:03Z","lastTransitionTime":"2025-12-05T19:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.494091 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.494131 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.494142 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.494158 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.494170 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:03Z","lastTransitionTime":"2025-12-05T19:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.570670 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" event={"ID":"3ae83517-5582-40f0-8f8c-f61e17a0b812","Type":"ContainerStarted","Data":"8a43b8518f643e153f9217b4a9c72518fa6d1cc450003fca04d310899156d55b"} Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.571590 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"6d3964cc67a3362ffbab0d3a0a1b8ab5c14cbbd8293031756ffc983961cc5b35"} Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.572582 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ksv4w" event={"ID":"e927a669-7d9d-442a-b020-339804e95af2","Type":"ContainerStarted","Data":"94c65e2f8119a86d34bf77100cf67209f3761f30c6a03ecd3c14e6a72838c082"} Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.573924 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705"} Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.576185 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627"} Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.584643 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.596565 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.597119 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.597153 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.597162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.597176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.597189 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:03Z","lastTransitionTime":"2025-12-05T19:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.612114 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.624613 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.637882 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.652006 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.667471 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.681706 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.698101 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.699106 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.699133 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.699144 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.699160 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.699171 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:03Z","lastTransitionTime":"2025-12-05T19:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.717695 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.729725 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.742147 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.753773 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.774121 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.793175 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.801507 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.801542 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 
19:04:03.801552 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.801566 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.801576 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:03Z","lastTransitionTime":"2025-12-05T19:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.805295 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd
852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.817477 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.830483 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.841746 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.853496 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.867189 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.880353 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.891835 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.904779 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.904816 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.904848 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.904865 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.904877 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:03Z","lastTransitionTime":"2025-12-05T19:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.907355 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:03 crc kubenswrapper[4828]: I1205 19:04:03.920422 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:03Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.038931 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.039091 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.039111 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.039125 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.039179 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:08.039165055 +0000 UTC m=+25.934387361 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.139907 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.140017 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.140048 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.140094 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.140216 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.140218 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:04:08.140186467 +0000 UTC m=+26.035408813 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.140304 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-05 19:04:08.14029472 +0000 UTC m=+26.035517026 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.140316 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.140355 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.140403 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.140428 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.140473 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:08.140442974 +0000 UTC m=+26.035665460 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.140511 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:08.140501355 +0000 UTC m=+26.035723891 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.445764 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.445875 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.445771 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.445948 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.446079 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:04 crc kubenswrapper[4828]: E1205 19:04:04.446215 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.590250 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.590305 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.590320 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.590371 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.590386 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:04Z","lastTransitionTime":"2025-12-05T19:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.596770 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerID="e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407" exitCode=0 Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.597607 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407"} Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.601159 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ksv4w" event={"ID":"e927a669-7d9d-442a-b020-339804e95af2","Type":"ContainerStarted","Data":"a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864"} Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.603548 4828 generic.go:334] "Generic (PLEG): container finished" podID="3ae83517-5582-40f0-8f8c-f61e17a0b812" containerID="4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872" exitCode=0 Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.603661 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" event={"ID":"3ae83517-5582-40f0-8f8c-f61e17a0b812","Type":"ContainerDied","Data":"4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872"} Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.634807 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.658056 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.659586 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc 
kubenswrapper[4828]: I1205 19:04:04.671713 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.679372 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.690970 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.697116 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.697147 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.697156 4828 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.697169 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.697180 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:04Z","lastTransitionTime":"2025-12-05T19:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.705728 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.722582 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.734421 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.748950 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687
fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.764550 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z 
is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.779762 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.793722 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.799932 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.799969 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.799980 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.799994 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.800004 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:04Z","lastTransitionTime":"2025-12-05T19:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.804613 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.816356 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.829983 4828 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 
19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.853407 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.893641 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.902405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.902442 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.902452 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.902468 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.902478 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:04Z","lastTransitionTime":"2025-12-05T19:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.915982 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.935042 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:04 crc kubenswrapper[4828]: I1205 19:04:04.956716 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:04Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.005675 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.005718 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.005727 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.005743 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.005755 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:05Z","lastTransitionTime":"2025-12-05T19:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.108275 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.108317 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.108329 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.108462 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.108488 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:05Z","lastTransitionTime":"2025-12-05T19:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.211041 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.211073 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.211082 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.211122 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.211133 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:05Z","lastTransitionTime":"2025-12-05T19:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.313174 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.313210 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.313221 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.313236 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.313246 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:05Z","lastTransitionTime":"2025-12-05T19:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.415962 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.416002 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.416013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.416029 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.416042 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:05Z","lastTransitionTime":"2025-12-05T19:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.518833 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.518872 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.518888 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.518902 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.518911 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:05Z","lastTransitionTime":"2025-12-05T19:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.608346 4828 generic.go:334] "Generic (PLEG): container finished" podID="3ae83517-5582-40f0-8f8c-f61e17a0b812" containerID="b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286" exitCode=0 Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.608459 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" event={"ID":"3ae83517-5582-40f0-8f8c-f61e17a0b812","Type":"ContainerDied","Data":"b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.613405 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.613454 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.613474 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.613491 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.613508 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.613524 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.620492 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.620538 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.620553 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.620576 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.620593 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:05Z","lastTransitionTime":"2025-12-05T19:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.634191 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.655564 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.674907 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.685971 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.700668 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.719903 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.723044 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.723074 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.723085 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.723101 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.723113 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:05Z","lastTransitionTime":"2025-12-05T19:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.739202 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.756554 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.768830 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.783470 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.796310 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.808042 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.818980 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.825464 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:05 
crc kubenswrapper[4828]: I1205 19:04:05.825630 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.825734 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.825866 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.825992 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:05Z","lastTransitionTime":"2025-12-05T19:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.832632 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.851242 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:05Z 
is after 2025-08-24T17:21:41Z" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.928745 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.928814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.928865 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.928890 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:05 crc kubenswrapper[4828]: I1205 19:04:05.928909 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:05Z","lastTransitionTime":"2025-12-05T19:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.031500 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.031610 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.031630 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.032107 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.032174 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:06Z","lastTransitionTime":"2025-12-05T19:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.134795 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.134852 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.134864 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.134900 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.134911 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:06Z","lastTransitionTime":"2025-12-05T19:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.237118 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.237168 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.237182 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.237199 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.237217 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:06Z","lastTransitionTime":"2025-12-05T19:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.339564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.339602 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.339613 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.339629 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.339642 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:06Z","lastTransitionTime":"2025-12-05T19:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.442108 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.442163 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.442179 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.442200 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.442216 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:06Z","lastTransitionTime":"2025-12-05T19:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.446349 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.446410 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:06 crc kubenswrapper[4828]: E1205 19:04:06.446433 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.446454 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:06 crc kubenswrapper[4828]: E1205 19:04:06.446528 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:06 crc kubenswrapper[4828]: E1205 19:04:06.446647 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.544190 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.544244 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.544261 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.544281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.544294 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:06Z","lastTransitionTime":"2025-12-05T19:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
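Every sync failure above has the same root: the kubelet sees no CNI configuration under /etc/kubernetes/cni/net.d/, so NetworkReady stays false and no pod sandbox can start. A minimal check, assuming it runs directly on the node and that the paths match the ones named in these messages (the kubelet reads /etc/kubernetes/cni/net.d/ here, while the multus and ovnkube mounts in the status payloads below point at /etc/cni/net.d):

    import json
    import pathlib

    # Directories named in the kubelet messages and pod volume mounts above.
    CNI_DIRS = ["/etc/kubernetes/cni/net.d", "/etc/cni/net.d"]

    def list_cni_configs(dirs):
        """Return (path, network name) for every parseable CNI config file."""
        found = []
        for d in dirs:
            # Path.glob yields nothing if the directory does not exist.
            for path in sorted(pathlib.Path(d).glob("*.conf*")):
                try:
                    data = json.loads(path.read_text())
                except (OSError, json.JSONDecodeError):
                    continue  # unreadable or partially written file
                found.append((str(path), data.get("name", "<unnamed>")))
        return found

    if __name__ == "__main__":
        configs = list_cni_configs(CNI_DIRS)
        if configs:
            for path, name in configs:
                print(f"{path}: network {name!r}")
        else:
            print("no CNI configuration found - NetworkReady will stay False")

Until ovnkube-controller (which mounts /etc/cni/net.d as host-cni-netd in the patches below) writes its config, the NodeNotReady heartbeats in this log will keep repeating unchanged.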
Has your network provider started?"} Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.619691 4828 generic.go:334] "Generic (PLEG): container finished" podID="3ae83517-5582-40f0-8f8c-f61e17a0b812" containerID="395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495" exitCode=0 Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.619765 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" event={"ID":"3ae83517-5582-40f0-8f8c-f61e17a0b812","Type":"ContainerDied","Data":"395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495"} Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.647396 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.647440 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.647448 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.647464 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.647474 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:06Z","lastTransitionTime":"2025-12-05T19:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.650753 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z 
is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.666807 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.683412 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.699237 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.709867 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.720853 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.744912 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.750665 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.750736 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.750754 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.750777 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.750792 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:06Z","lastTransitionTime":"2025-12-05T19:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.760511 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.773477 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.810964 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.842103 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.852868 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.852905 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.852914 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.852929 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.852938 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:06Z","lastTransitionTime":"2025-12-05T19:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.857975 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.872704 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.885009 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.898989 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:06Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.955207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.955254 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.955264 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.955280 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:06 crc kubenswrapper[4828]: I1205 19:04:06.955292 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:06Z","lastTransitionTime":"2025-12-05T19:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.058154 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.058223 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.058241 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.058262 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.058279 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:07Z","lastTransitionTime":"2025-12-05T19:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.161422 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.161519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.161544 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.161575 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.161600 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:07Z","lastTransitionTime":"2025-12-05T19:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.264096 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.264151 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.264163 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.264178 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.264189 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:07Z","lastTransitionTime":"2025-12-05T19:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.366989 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.367040 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.367055 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.367081 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.367098 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:07Z","lastTransitionTime":"2025-12-05T19:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.471264 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.471310 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.471323 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.471343 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.471356 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:07Z","lastTransitionTime":"2025-12-05T19:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.573751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.573782 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.573812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.573848 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.573860 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:07Z","lastTransitionTime":"2025-12-05T19:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.626635 4828 generic.go:334] "Generic (PLEG): container finished" podID="3ae83517-5582-40f0-8f8c-f61e17a0b812" containerID="53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54" exitCode=0 Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.626722 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" event={"ID":"3ae83517-5582-40f0-8f8c-f61e17a0b812","Type":"ContainerDied","Data":"53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54"} Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.637720 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704"} Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.642557 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.659048 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.678791 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.678898 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.678926 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.678966 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.678990 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:07Z","lastTransitionTime":"2025-12-05T19:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.682550 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.702392 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.719174 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.734108 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.747849 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.759526 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhoo
k\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.778786 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.781670 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.781711 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.781721 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.781737 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.781746 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:07Z","lastTransitionTime":"2025-12-05T19:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.800178 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0
ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.812982 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.827125 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.842431 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.855271 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.865668 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:07Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.884317 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.884362 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.884371 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.884390 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.884399 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:07Z","lastTransitionTime":"2025-12-05T19:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.990257 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.990393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.990407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.990421 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:07 crc kubenswrapper[4828]: I1205 19:04:07.990520 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:07Z","lastTransitionTime":"2025-12-05T19:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.083741 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.083951 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.083975 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.083989 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.084038 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:16.084022173 +0000 UTC m=+33.979244479 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.093703 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.093738 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.093748 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.093763 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.093774 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:08Z","lastTransitionTime":"2025-12-05T19:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.185147 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.185261 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.185286 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.185361 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:04:16.185339413 +0000 UTC m=+34.080561719 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.185389 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.185435 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.185486 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.185518 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.185540 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.185551 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.185520 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:16.185500198 +0000 UTC m=+34.080722504 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.185623 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:16.185611 +0000 UTC m=+34.080833306 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.185653 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:16.185629791 +0000 UTC m=+34.080852097 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.201140 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.201210 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.201226 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.201254 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.201271 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:08Z","lastTransitionTime":"2025-12-05T19:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.304765 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.304857 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.304868 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.304887 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.304899 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:08Z","lastTransitionTime":"2025-12-05T19:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.407390 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.407435 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.407445 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.407462 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.407473 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:08Z","lastTransitionTime":"2025-12-05T19:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.445856 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.445878 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.446043 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.446261 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.446409 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 19:04:08 crc kubenswrapper[4828]: E1205 19:04:08.446530 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.510114 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.510164 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.510178 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.510197 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.510211 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:08Z","lastTransitionTime":"2025-12-05T19:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.612947 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.612987 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.612996 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.613010 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.613019 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:08Z","lastTransitionTime":"2025-12-05T19:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.643375 4828 generic.go:334] "Generic (PLEG): container finished" podID="3ae83517-5582-40f0-8f8c-f61e17a0b812" containerID="52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07" exitCode=0
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.643424 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" event={"ID":"3ae83517-5582-40f0-8f8c-f61e17a0b812","Type":"ContainerDied","Data":"52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07"}
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.665245 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.680000 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.695850 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.712876 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.717519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.717549 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.717565 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.717580 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.717589 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:08Z","lastTransitionTime":"2025-12-05T19:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.732645 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.746274 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.762820 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.774913 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.789731 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.808806 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z 
is after 2025-08-24T17:21:41Z" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.820203 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.820245 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.820254 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.820270 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.820280 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:08Z","lastTransitionTime":"2025-12-05T19:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.821278 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z" Dec 05 
19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.830218 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.838803 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.848094 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.858746 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f3
8d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:08Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.924089 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.924148 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.924156 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.924170 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:08 crc kubenswrapper[4828]: I1205 19:04:08.924179 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:08Z","lastTransitionTime":"2025-12-05T19:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.027264 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.027336 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.027399 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.027431 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.027455 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:09Z","lastTransitionTime":"2025-12-05T19:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.131723 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.131791 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.131812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.131874 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.131901 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:09Z","lastTransitionTime":"2025-12-05T19:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.234782 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.234843 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.234861 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.234877 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.234888 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:09Z","lastTransitionTime":"2025-12-05T19:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.338241 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.338277 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.338286 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.338298 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.338307 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:09Z","lastTransitionTime":"2025-12-05T19:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.446388 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.447414 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.447444 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.447469 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.447530 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:09Z","lastTransitionTime":"2025-12-05T19:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.550046 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.550101 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.550119 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.550142 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.550160 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:09Z","lastTransitionTime":"2025-12-05T19:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.650958 4828 generic.go:334] "Generic (PLEG): container finished" podID="3ae83517-5582-40f0-8f8c-f61e17a0b812" containerID="4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1" exitCode=0 Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.651029 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" event={"ID":"3ae83517-5582-40f0-8f8c-f61e17a0b812","Type":"ContainerDied","Data":"4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1"} Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.652394 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.652476 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.652499 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.652529 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.652554 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:09Z","lastTransitionTime":"2025-12-05T19:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.669266 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.683950 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.699727 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.716771 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.750488 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z 
is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.755232 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.755274 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.755287 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.755309 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.755323 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:09Z","lastTransitionTime":"2025-12-05T19:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.761318 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699
a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.773556 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.790418 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.803839 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.815216 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.827173 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.847753 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.857507 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.857541 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.857550 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.857563 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.857571 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:09Z","lastTransitionTime":"2025-12-05T19:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.860105 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.870759 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.881454 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:09Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.960129 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.960165 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.960176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.960193 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:09 crc kubenswrapper[4828]: I1205 19:04:09.960204 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:09Z","lastTransitionTime":"2025-12-05T19:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.062180 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.062209 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.062219 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.062233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.062243 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:10Z","lastTransitionTime":"2025-12-05T19:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.164078 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.164118 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.164130 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.164150 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.164162 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:10Z","lastTransitionTime":"2025-12-05T19:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.265882 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.265927 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.265937 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.265949 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.265958 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:10Z","lastTransitionTime":"2025-12-05T19:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.369036 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.369070 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.369080 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.369096 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.369107 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:10Z","lastTransitionTime":"2025-12-05T19:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.446171 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.446225 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.446181 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:10 crc kubenswrapper[4828]: E1205 19:04:10.446330 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:10 crc kubenswrapper[4828]: E1205 19:04:10.446419 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:10 crc kubenswrapper[4828]: E1205 19:04:10.446477 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.471544 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.471600 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.471612 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.471629 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.471641 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:10Z","lastTransitionTime":"2025-12-05T19:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.573480 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.573523 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.573535 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.573553 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.573566 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:10Z","lastTransitionTime":"2025-12-05T19:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.661778 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"1ba523edfc6ffbd7899f2bfd11f18be350f3e8529d357520c9214630dd82c39f"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.662537 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.662559 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.662569 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.669028 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" event={"ID":"3ae83517-5582-40f0-8f8c-f61e17a0b812","Type":"ContainerStarted","Data":"c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.676018 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.676043 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.676052 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.676064 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.676073 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:10Z","lastTransitionTime":"2025-12-05T19:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.676930 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.690453 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 
19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.693960 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.694408 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.706718 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.723303 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.739796 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba523edfc6ffbd7899f2bfd11f18be350f3e852
9d357520c9214630dd82c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.751358 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.762904 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.774170 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.777692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.777718 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.777726 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.777739 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.777748 4828 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:10Z","lastTransitionTime":"2025-12-05T19:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.783966 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.793395 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.811979 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.825946 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.840292 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.852813 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.865666 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.880110 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.880157 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.880170 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.880186 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.880197 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:10Z","lastTransitionTime":"2025-12-05T19:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.885362 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba523edfc6ffbd7899f2bfd11f18be350f3e8529d357520c9214630dd82c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.895150 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.904989 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.915784 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.926667 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f3
8d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.938588 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.949303 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.962422 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.976270 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.984586 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.984854 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.984982 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.985072 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.985146 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:10Z","lastTransitionTime":"2025-12-05T19:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:10 crc kubenswrapper[4828]: I1205 19:04:10.996180 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f7
59921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:10Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.011422 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:11Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.023527 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:11Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.034798 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:11Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.049254 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:11Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.061590 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:11Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.087618 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.087648 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.087659 4828 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.087673 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.087684 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:11Z","lastTransitionTime":"2025-12-05T19:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.190212 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.190296 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.190307 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.190321 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.190330 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:11Z","lastTransitionTime":"2025-12-05T19:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.293141 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.293190 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.293204 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.293222 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.293239 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:11Z","lastTransitionTime":"2025-12-05T19:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.397111 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.397181 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.397205 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.397234 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.397257 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:11Z","lastTransitionTime":"2025-12-05T19:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.499890 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.499955 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.499978 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.500003 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.500026 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:11Z","lastTransitionTime":"2025-12-05T19:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.603062 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.603340 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.603422 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.603523 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.603601 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:11Z","lastTransitionTime":"2025-12-05T19:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.706034 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.706061 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.706069 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.706084 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.706093 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:11Z","lastTransitionTime":"2025-12-05T19:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.808314 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.808358 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.808372 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.808386 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.808396 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:11Z","lastTransitionTime":"2025-12-05T19:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.911252 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.911492 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.911554 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.911619 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:11 crc kubenswrapper[4828]: I1205 19:04:11.911674 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:11Z","lastTransitionTime":"2025-12-05T19:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.014811 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.014865 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.014876 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.014895 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.014907 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:12Z","lastTransitionTime":"2025-12-05T19:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.117952 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.118258 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.118271 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.118288 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.118298 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:12Z","lastTransitionTime":"2025-12-05T19:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.220408 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.220454 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.220468 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.220485 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.220496 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:12Z","lastTransitionTime":"2025-12-05T19:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.323519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.323564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.323580 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.323602 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.323620 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:12Z","lastTransitionTime":"2025-12-05T19:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.426104 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.426162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.426183 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.426214 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.426236 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:12Z","lastTransitionTime":"2025-12-05T19:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.447061 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.447113 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.447243 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:12 crc kubenswrapper[4828]: E1205 19:04:12.447724 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
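
The loop above is the kubelet re-evaluating node readiness every ~100 ms: the Ready condition stays False because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, and the three pods without sandboxes cannot be synced until one appears. A minimal stdlib-only sketch to confirm, on the node itself (e.g. inside the CRC VM), whether the network plugin has written its config yet; the path is taken from the log message, and nothing here is part of any OpenShift tooling:

```python
#!/usr/bin/env python3
"""Sketch: check whether the CNI configuration directory the kubelet is
complaining about is populated yet. The path comes from the log message
itself; diagnostic aid only, not part of any OpenShift tooling."""
import os
import sys

CNI_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet error

try:
    entries = sorted(os.listdir(CNI_DIR))
except FileNotFoundError:
    sys.exit(f"{CNI_DIR} does not exist: the network plugin never started")

confs = [e for e in entries if e.endswith((".conf", ".conflist", ".json"))]
if confs:
    for name in confs:
        print("found CNI config:", os.path.join(CNI_DIR, name))
else:
    print(f"{CNI_DIR} holds no CNI config yet:", entries or "(empty)")
    print("while this stays true, the NodeNotReady/NetworkPluginNotReady "
          "entries above keep repeating")
```

On a healthy node this prints the ovn-kubernetes (or multus) conflist; as long as it reports an empty directory, the heartbeat entries above will keep firing at roughly ten per second.
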
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:12 crc kubenswrapper[4828]: E1205 19:04:12.447758 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:12 crc kubenswrapper[4828]: E1205 19:04:12.447605 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.471362 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.491789 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.512341 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.530575 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.530627 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.530640 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.530657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.530670 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:12Z","lastTransitionTime":"2025-12-05T19:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.541135 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.559163 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.572686 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 
2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.589107 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.603029 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.618561 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.646957 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.646991 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:12 crc 
kubenswrapper[4828]: I1205 19:04:12.646999 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.647013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.647024 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:12Z","lastTransitionTime":"2025-12-05T19:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.665830 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba523edfc6ffbd7899f2bfd11f18be350f3e852
9d357520c9214630dd82c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.679246 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/0.log" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.681740 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerID="1ba523edfc6ffbd7899f2bfd11f18be350f3e8529d357520c9214630dd82c39f" exitCode=1 Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.681780 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"1ba523edfc6ffbd7899f2bfd11f18be350f3e8529d357520c9214630dd82c39f"} Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.682378 4828 scope.go:117] "RemoveContainer" containerID="1ba523edfc6ffbd7899f2bfd11f18be350f3e8529d357520c9214630dd82c39f" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.686011 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.698767 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.711665 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.722993 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.734219 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.743755 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.748882 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.748937 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.748948 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.748962 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.748972 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:12Z","lastTransitionTime":"2025-12-05T19:04:12Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.753894 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.763992 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.775341 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f3
8d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.787482 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.798957 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.811653 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.827109 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.848187 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.850465 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.850501 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.850519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.850536 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.850546 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:12Z","lastTransitionTime":"2025-12-05T19:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.859977 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.870190 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.881006 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.894645 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.907569 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.924428 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba523edfc6ffbd7899f2bfd11f18be350f3e8529d357520c9214630dd82c39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba523edfc6ffbd7899f2bfd11f18be350f3e8529d357520c9214630dd82c39f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:12Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 19:04:12.176435 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 19:04:12.176483 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 19:04:12.176561 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 19:04:12.176595 6139 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 19:04:12.176601 6139 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 19:04:12.176617 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 19:04:12.176630 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 19:04:12.176665 6139 factory.go:656] Stopping watch factory\\\\nI1205 19:04:12.176689 6139 ovnkube.go:599] Stopped ovnkube\\\\nI1205 19:04:12.176721 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 19:04:12.176743 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 19:04:12.176751 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 19:04:12.176760 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 19:04:12.176767 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 19:04:12.176775 6139 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 
19\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.953317 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.953362 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.953370 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.953384 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:12 crc kubenswrapper[4828]: I1205 19:04:12.953394 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:12Z","lastTransitionTime":"2025-12-05T19:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.055632 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.055679 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.055691 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.055708 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.055719 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.157905 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.157941 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.157952 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.157969 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.157982 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.260378 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.260409 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.260417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.260432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.260439 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.363136 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.363173 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.363182 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.363196 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.363206 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.465561 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.465641 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.465681 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.465721 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.465746 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.568433 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.568462 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.568471 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.568483 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.568492 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.671660 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.671741 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.671764 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.671791 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.671810 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.690525 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/0.log" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.693350 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.693698 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.716525 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68
b1a3bc48fe2d07bc947b4771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba523edfc6ffbd7899f2bfd11f18be350f3e8529d357520c9214630dd82c39f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:12Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 19:04:12.176435 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 19:04:12.176483 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 19:04:12.176561 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 19:04:12.176595 6139 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 19:04:12.176601 6139 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 19:04:12.176617 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 19:04:12.176630 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 19:04:12.176665 6139 factory.go:656] Stopping watch factory\\\\nI1205 19:04:12.176689 6139 ovnkube.go:599] Stopped ovnkube\\\\nI1205 19:04:12.176721 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 19:04:12.176743 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 19:04:12.176751 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 19:04:12.176760 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 19:04:12.176767 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 19:04:12.176775 6139 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 
19\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.729735 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.744534 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.759120 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.772850 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.774090 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.774119 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.774127 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.774141 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.774153 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.789229 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.789266 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.789279 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.789316 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.789336 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.799200 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144f
e480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: E1205 19:04:13.801016 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.804254 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.804290 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.804299 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.804311 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.804321 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: E1205 19:04:13.815443 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.819996 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c
492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.822403 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.822446 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.822459 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.822477 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.822489 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: E1205 19:04:13.834220 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.835927 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.838982 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.839043 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.839053 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.839068 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.839078 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.848151 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: E1205 19:04:13.850366 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.853711 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.853757 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.853770 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.853788 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.853801 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.862661 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: E1205 19:04:13.868113 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: E1205 19:04:13.868223 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.875781 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.876857 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.876908 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.876924 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.876946 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.876961 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.886519 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc
32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.895544 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.906321 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.918998 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:13Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.979300 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.979364 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 
19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.979383 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.979406 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:13 crc kubenswrapper[4828]: I1205 19:04:13.979424 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:13Z","lastTransitionTime":"2025-12-05T19:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.082170 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.082233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.082251 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.082275 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.082294 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:14Z","lastTransitionTime":"2025-12-05T19:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.114740 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.131711 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.149633 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.169359 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f3
8d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.185574 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.185640 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.185660 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.185685 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.185704 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:14Z","lastTransitionTime":"2025-12-05T19:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.191064 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.210120 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.227198 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.245783 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.271208 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c
492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.287326 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726
da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.287719 4828 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.287751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.287761 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.287777 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.287788 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:14Z","lastTransitionTime":"2025-12-05T19:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.305933 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.319841 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.334807 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.348077 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.361688 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.378815 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68
b1a3bc48fe2d07bc947b4771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba523edfc6ffbd7899f2bfd11f18be350f3e8529d357520c9214630dd82c39f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:12Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 19:04:12.176435 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 19:04:12.176483 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 19:04:12.176561 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 19:04:12.176595 6139 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 19:04:12.176601 6139 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 19:04:12.176617 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 19:04:12.176630 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 19:04:12.176665 6139 factory.go:656] Stopping watch factory\\\\nI1205 19:04:12.176689 6139 ovnkube.go:599] Stopped ovnkube\\\\nI1205 19:04:12.176721 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 19:04:12.176743 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 19:04:12.176751 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 19:04:12.176760 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 19:04:12.176767 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 19:04:12.176775 6139 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 
19\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.390507 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.390552 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.390560 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.390573 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.390583 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:14Z","lastTransitionTime":"2025-12-05T19:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.446261 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.446330 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:14 crc kubenswrapper[4828]: E1205 19:04:14.446392 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:14 crc kubenswrapper[4828]: E1205 19:04:14.446476 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.446520 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:14 crc kubenswrapper[4828]: E1205 19:04:14.446689 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.492642 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.492695 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.492710 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.492728 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.492740 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:14Z","lastTransitionTime":"2025-12-05T19:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.595603 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.595644 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.595658 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.595710 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.595722 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:14Z","lastTransitionTime":"2025-12-05T19:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.698417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.698477 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.698495 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.698517 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.698535 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/1.log" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.698533 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:14Z","lastTransitionTime":"2025-12-05T19:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.699439 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/0.log" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.703202 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerID="4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771" exitCode=1 Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.703238 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771"} Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.703275 4828 scope.go:117] "RemoveContainer" containerID="1ba523edfc6ffbd7899f2bfd11f18be350f3e8529d357520c9214630dd82c39f" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.703976 4828 scope.go:117] "RemoveContainer" containerID="4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771" Dec 05 19:04:14 crc kubenswrapper[4828]: E1205 19:04:14.704130 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.727385 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68
b1a3bc48fe2d07bc947b4771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba523edfc6ffbd7899f2bfd11f18be350f3e8529d357520c9214630dd82c39f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:12Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 19:04:12.176435 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 19:04:12.176483 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 19:04:12.176561 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 19:04:12.176595 6139 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 19:04:12.176601 6139 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 19:04:12.176617 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 19:04:12.176630 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 19:04:12.176665 6139 factory.go:656] Stopping watch factory\\\\nI1205 19:04:12.176689 6139 ovnkube.go:599] Stopped ovnkube\\\\nI1205 19:04:12.176721 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 19:04:12.176743 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 19:04:12.176751 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 19:04:12.176760 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 19:04:12.176767 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 19:04:12.176775 6139 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 19\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"message\\\":\\\"le-plugin-85b44fc459-gdk6g\\\\nI1205 19:04:13.382483 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-phlsx after 0 failed attempt(s)\\\\nI1205 19:04:13.382424 6266 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 19:04:13.382492 6266 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-phlsx\\\\nI1205 19:04:13.382495 6266 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1205 19:04:13.382443 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ksv4w after 0 failed attempt(s)\\\\nI1205 19:04:13.382506 6266 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ksv4w\\\\nI1205 19:04:13.382506 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1205 19:04:13.382506 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe7
96886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.738062 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.750069 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.761516 4828 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 
19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.775122 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.790901 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.801309 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.801343 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.801352 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.801368 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.801377 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:14Z","lastTransitionTime":"2025-12-05T19:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.806013 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.817679 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.832247 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.860900 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.880581 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.892725 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.903521 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.903550 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.903560 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.903573 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.903584 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:14Z","lastTransitionTime":"2025-12-05T19:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.905228 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.921120 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:14 crc kubenswrapper[4828]: I1205 19:04:14.935958 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:14Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.006326 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.006394 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.006407 4828 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.006446 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.006460 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:15Z","lastTransitionTime":"2025-12-05T19:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.029682 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt"] Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.030141 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.033132 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.033292 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.049240 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.063219 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.084849 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.096047 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.109293 4828 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.109339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.109350 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.109369 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.109393 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:15Z","lastTransitionTime":"2025-12-05T19:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.110098 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.124726 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.143257 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.159126 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.161620 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44935bbd-b8fe-44ed-93ac-86eed967e178-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dthbt\" (UID: \"44935bbd-b8fe-44ed-93ac-86eed967e178\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.161699 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44935bbd-b8fe-44ed-93ac-86eed967e178-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dthbt\" (UID: \"44935bbd-b8fe-44ed-93ac-86eed967e178\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.161737 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjq74\" (UniqueName: \"kubernetes.io/projected/44935bbd-b8fe-44ed-93ac-86eed967e178-kube-api-access-mjq74\") pod \"ovnkube-control-plane-749d76644c-dthbt\" (UID: \"44935bbd-b8fe-44ed-93ac-86eed967e178\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 
19:04:15.161772 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44935bbd-b8fe-44ed-93ac-86eed967e178-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dthbt\" (UID: \"44935bbd-b8fe-44ed-93ac-86eed967e178\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.171539 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.184872 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.208163 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.212095 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.212127 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.212136 4828 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.212151 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.212176 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:15Z","lastTransitionTime":"2025-12-05T19:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.228288 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.242383 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.256318 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.263068 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/44935bbd-b8fe-44ed-93ac-86eed967e178-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dthbt\" (UID: \"44935bbd-b8fe-44ed-93ac-86eed967e178\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.263143 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjq74\" (UniqueName: \"kubernetes.io/projected/44935bbd-b8fe-44ed-93ac-86eed967e178-kube-api-access-mjq74\") pod \"ovnkube-control-plane-749d76644c-dthbt\" (UID: \"44935bbd-b8fe-44ed-93ac-86eed967e178\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.263183 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44935bbd-b8fe-44ed-93ac-86eed967e178-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dthbt\" (UID: \"44935bbd-b8fe-44ed-93ac-86eed967e178\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.263228 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44935bbd-b8fe-44ed-93ac-86eed967e178-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dthbt\" (UID: \"44935bbd-b8fe-44ed-93ac-86eed967e178\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.263708 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44935bbd-b8fe-44ed-93ac-86eed967e178-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dthbt\" (UID: \"44935bbd-b8fe-44ed-93ac-86eed967e178\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.264058 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44935bbd-b8fe-44ed-93ac-86eed967e178-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dthbt\" (UID: \"44935bbd-b8fe-44ed-93ac-86eed967e178\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.268810 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.269334 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44935bbd-b8fe-44ed-93ac-86eed967e178-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dthbt\" (UID: \"44935bbd-b8fe-44ed-93ac-86eed967e178\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.282695 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjq74\" (UniqueName: \"kubernetes.io/projected/44935bbd-b8fe-44ed-93ac-86eed967e178-kube-api-access-mjq74\") pod \"ovnkube-control-plane-749d76644c-dthbt\" (UID: \"44935bbd-b8fe-44ed-93ac-86eed967e178\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.286930 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run
-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba523edfc6ffbd7899f2bfd11f18be350f3e8529d357520c9214630dd82c39f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:12Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 19:04:12.176435 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 19:04:12.176483 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 19:04:12.176561 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 19:04:12.176595 6139 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 19:04:12.176601 6139 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 19:04:12.176617 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 19:04:12.176630 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 19:04:12.176665 6139 factory.go:656] Stopping watch factory\\\\nI1205 19:04:12.176689 6139 ovnkube.go:599] Stopped ovnkube\\\\nI1205 19:04:12.176721 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 19:04:12.176743 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 19:04:12.176751 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 19:04:12.176760 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 19:04:12.176767 6139 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI1205 19:04:12.176775 6139 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 19\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"message\\\":\\\"le-plugin-85b44fc459-gdk6g\\\\nI1205 19:04:13.382483 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-phlsx after 0 failed attempt(s)\\\\nI1205 19:04:13.382424 6266 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 19:04:13.382492 6266 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-phlsx\\\\nI1205 19:04:13.382495 6266 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1205 19:04:13.382443 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ksv4w after 0 failed attempt(s)\\\\nI1205 19:04:13.382506 6266 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ksv4w\\\\nI1205 19:04:13.382506 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1205 19:04:13.382506 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.315104 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.315149 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.315162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.315183 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.315195 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:15Z","lastTransitionTime":"2025-12-05T19:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.345020 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" Dec 05 19:04:15 crc kubenswrapper[4828]: W1205 19:04:15.362444 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44935bbd_b8fe_44ed_93ac_86eed967e178.slice/crio-2e559fbe41f10588a463b8d9b591d521d291226e8862239beb221f2c6f6e5ce3 WatchSource:0}: Error finding container 2e559fbe41f10588a463b8d9b591d521d291226e8862239beb221f2c6f6e5ce3: Status 404 returned error can't find the container with id 2e559fbe41f10588a463b8d9b591d521d291226e8862239beb221f2c6f6e5ce3 Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.417421 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.417462 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.417473 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.417489 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.417501 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:15Z","lastTransitionTime":"2025-12-05T19:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.523018 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.523082 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.523097 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.523116 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.523128 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:15Z","lastTransitionTime":"2025-12-05T19:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.625876 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.625916 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.625928 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.625946 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.625958 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:15Z","lastTransitionTime":"2025-12-05T19:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.710259 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" event={"ID":"44935bbd-b8fe-44ed-93ac-86eed967e178","Type":"ContainerStarted","Data":"2e559fbe41f10588a463b8d9b591d521d291226e8862239beb221f2c6f6e5ce3"} Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.712354 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/1.log" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.716677 4828 scope.go:117] "RemoveContainer" containerID="4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771" Dec 05 19:04:15 crc kubenswrapper[4828]: E1205 19:04:15.716873 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.728076 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.728133 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.728146 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.728170 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.728188 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:15Z","lastTransitionTime":"2025-12-05T19:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.737790 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.749916 4828 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.762527 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.782011 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.801756 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"message\\\":\\\"le-plugin-85b44fc459-gdk6g\\\\nI1205 19:04:13.382483 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-phlsx after 0 failed attempt(s)\\\\nI1205 19:04:13.382424 6266 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 19:04:13.382492 6266 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-phlsx\\\\nI1205 19:04:13.382495 6266 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1205 19:04:13.382443 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ksv4w after 0 failed attempt(s)\\\\nI1205 19:04:13.382506 6266 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ksv4w\\\\nI1205 19:04:13.382506 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1205 19:04:13.382506 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.815220 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.831087 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.831403 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.831527 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.831660 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.831306 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.831819 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:15Z","lastTransitionTime":"2025-12-05T19:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.841930 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.853024 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.864662 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.875642 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.900022 4828 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:
03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.912887 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.930710 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.934500 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.934555 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.934575 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.934599 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.934619 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:15Z","lastTransitionTime":"2025-12-05T19:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
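The recurring KubeletNotReady condition is independent of the webhook problem: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet. A sketch of that kind of readiness probe follows; the directory comes from the log, while the recognized file extensions are an assumption for illustration:

```go
// cni_ready.go: report whether a CNI conf directory has any config yet,
// mirroring the NetworkPluginNotReady condition logged above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		// Extensions are an assumption; CNI tooling commonly uses these.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found++
			fmt.Println("found CNI config:", e.Name())
		}
	}
	if found == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file; has your network provider started?")
	}
}
```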
Has your network provider started?"} Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.945606 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:15 crc kubenswrapper[4828]: I1205 19:04:15.958426 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.036935 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.036972 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.036984 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.037002 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.037014 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:16Z","lastTransitionTime":"2025-12-05T19:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.139193 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.139233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.139243 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.139259 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.139268 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:16Z","lastTransitionTime":"2025-12-05T19:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.173420 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.173598 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.173628 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.173640 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.173696 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:32.173681267 +0000 UTC m=+50.068903573 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.241386 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.241423 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.241432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.241446 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.241455 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:16Z","lastTransitionTime":"2025-12-05T19:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
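The projected-volume failures above collect one error per missing source and surface them as a single bracketed list: both kube-root-ca.crt and openshift-service-ca.crt are "not registered" in the kubelet's object cache, so kube-api-access-s2dwl cannot be prepared. The kubelet uses its own aggregate error type for this; as a standard-library stand-in, errors.Join produces the same shape of combined failure:

```go
// projected_errors.go: aggregate per-source lookup failures the way the
// MountVolume.SetUp errors above bundle them. errors.Join (Go 1.20+) is
// a stdlib stand-in; the kubelet's actual aggregate type formats
// differently.
package main

import (
	"errors"
	"fmt"
)

// lookup is an illustrative stand-in for the kubelet object cache; here
// every lookup fails, as in the log.
func lookup(namespace, name string) error {
	return fmt.Errorf("object %q/%q not registered", namespace, name)
}

func main() {
	err := errors.Join(
		lookup("openshift-network-diagnostics", "kube-root-ca.crt"),
		lookup("openshift-network-diagnostics", "openshift-service-ca.crt"),
	)
	fmt.Println("Error preparing data for projected volume kube-api-access-s2dwl:")
	fmt.Println(err)
}
```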
Has your network provider started?"} Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.273992 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.274092 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.274115 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.274173 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.274134 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:04:32.274119432 +0000 UTC m=+50.169341728 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.274204 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:32.274196844 +0000 UTC m=+50.169419150 (durationBeforeRetry 16s). 
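Each failed mount or unmount above is parked with "No retries permitted until ..." sixteen seconds out, which is consistent with an exponential backoff that has already doubled several times; the kubelet's exact constants are not visible in this log. A generic sketch of such a schedule, with illustrative initial and cap values:

```go
// backoff.go: generic exponential backoff of the kind behind
// "durationBeforeRetry 16s". Initial delay and cap are illustrative,
// not the kubelet's actual constants.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	maxDelay := 2 * time.Minute
	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```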
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.274217 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.274275 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.274286 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.274300 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.274310 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.274319 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:32.274308268 +0000 UTC m=+50.169530574 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.274333 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 19:04:32.274326628 +0000 UTC m=+50.169548934 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.344004 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.344036 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.344046 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.344062 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.344073 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:16Z","lastTransitionTime":"2025-12-05T19:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.445572 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.445644 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.445722 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.445728 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.445882 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.446018 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
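The setters.go lines that repeat through this window all emit the same Ready=False node condition. Its JSON shape can be reproduced with a plain struct whose field names follow the log; this is a sketch for illustration, not the real k8s.io/api NodeCondition type:

```go
// node_condition.go: marshal the Ready=False condition logged by
// setters.go above. Plain struct for illustration only.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	c := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2025-12-05T19:04:16Z",
		LastTransitionTime: "2025-12-05T19:04:16Z",
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
	}
	b, err := json.Marshal(c)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```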
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.447162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.447232 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.447256 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.447290 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.447319 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:16Z","lastTransitionTime":"2025-12-05T19:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.549348 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.549433 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.549455 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.549487 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.549510 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:16Z","lastTransitionTime":"2025-12-05T19:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.653014 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.653059 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.653070 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.653088 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.653099 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:16Z","lastTransitionTime":"2025-12-05T19:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.722029 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" event={"ID":"44935bbd-b8fe-44ed-93ac-86eed967e178","Type":"ContainerStarted","Data":"fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78"} Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.722075 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" event={"ID":"44935bbd-b8fe-44ed-93ac-86eed967e178","Type":"ContainerStarted","Data":"5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b"} Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.748628 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68
b1a3bc48fe2d07bc947b4771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"message\\\":\\\"le-plugin-85b44fc459-gdk6g\\\\nI1205 19:04:13.382483 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-phlsx after 0 failed attempt(s)\\\\nI1205 19:04:13.382424 6266 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 19:04:13.382492 6266 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-phlsx\\\\nI1205 19:04:13.382495 6266 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1205 19:04:13.382443 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ksv4w after 0 failed attempt(s)\\\\nI1205 19:04:13.382506 6266 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ksv4w\\\\nI1205 19:04:13.382506 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1205 19:04:13.382506 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.756524 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.756600 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.756650 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.756681 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.756704 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:16Z","lastTransitionTime":"2025-12-05T19:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.772402 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.787137 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.798160 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.809963 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.823101 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.835899 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.854104 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bvf6n"] Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.854651 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:16 crc kubenswrapper[4828]: E1205 19:04:16.854742 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.858791 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.859212 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.859238 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.859251 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.859265 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.859275 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:16Z","lastTransitionTime":"2025-12-05T19:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.874575 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.885811 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.900512 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.915556 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.928973 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.941457 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.954757 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.961909 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:16 
crc kubenswrapper[4828]: I1205 19:04:16.961962 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.961977 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.961997 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.962011 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:16Z","lastTransitionTime":"2025-12-05T19:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.969400 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-co
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"m
ountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.979699 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.980147 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4bxb\" (UniqueName: \"kubernetes.io/projected/0595333b-a181-4a2b-90b8-e2accf80e78e-kube-api-access-t4bxb\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.980289 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.989985 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:16 crc kubenswrapper[4828]: I1205 19:04:16.999590 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:16Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.023111 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.039258 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.054647 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.064716 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.064751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.064759 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.064772 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.064781 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:17Z","lastTransitionTime":"2025-12-05T19:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.068267 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.081666 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.081711 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4bxb\" (UniqueName: \"kubernetes.io/projected/0595333b-a181-4a2b-90b8-e2accf80e78e-kube-api-access-t4bxb\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:17 crc kubenswrapper[4828]: E1205 19:04:17.081967 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 19:04:17 crc kubenswrapper[4828]: E1205 19:04:17.082073 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs 
podName:0595333b-a181-4a2b-90b8-e2accf80e78e nodeName:}" failed. No retries permitted until 2025-12-05 19:04:17.582043871 +0000 UTC m=+35.477266217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs") pod "network-metrics-daemon-bvf6n" (UID: "0595333b-a181-4a2b-90b8-e2accf80e78e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.084252 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibi
n\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.097037 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.101056 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4bxb\" (UniqueName: \"kubernetes.io/projected/0595333b-a181-4a2b-90b8-e2accf80e78e-kube-api-access-t4bxb\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.110778 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.133524 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"message\\\":\\\"le-plugin-85b44fc459-gdk6g\\\\nI1205 19:04:13.382483 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-phlsx after 0 failed attempt(s)\\\\nI1205 19:04:13.382424 6266 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 19:04:13.382492 6266 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-phlsx\\\\nI1205 19:04:13.382495 6266 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1205 19:04:13.382443 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ksv4w after 0 failed attempt(s)\\\\nI1205 19:04:13.382506 6266 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ksv4w\\\\nI1205 19:04:13.382506 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1205 19:04:13.382506 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.144094 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.157715 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.167252 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.167281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.167288 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.167301 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.167309 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:17Z","lastTransitionTime":"2025-12-05T19:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.169356 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.181996 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.196524 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-
crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.209048 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:17Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.270157 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.270205 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.270216 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.270234 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.270248 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:17Z","lastTransitionTime":"2025-12-05T19:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.372703 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.372764 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.372782 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.372803 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.372817 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:17Z","lastTransitionTime":"2025-12-05T19:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.475900 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.475943 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.475954 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.475968 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.475980 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:17Z","lastTransitionTime":"2025-12-05T19:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.578684 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.578761 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.578784 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.578818 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.578876 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:17Z","lastTransitionTime":"2025-12-05T19:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.586522 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:04:17 crc kubenswrapper[4828]: E1205 19:04:17.586913 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 05 19:04:17 crc kubenswrapper[4828]: E1205 19:04:17.586993 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs podName:0595333b-a181-4a2b-90b8-e2accf80e78e nodeName:}" failed. No retries permitted until 2025-12-05 19:04:18.586969926 +0000 UTC m=+36.482192262 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs") pod "network-metrics-daemon-bvf6n" (UID: "0595333b-a181-4a2b-90b8-e2accf80e78e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.682501 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.682560 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.682576 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.682603 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.682621 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:17Z","lastTransitionTime":"2025-12-05T19:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.786090 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.786189 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.786246 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.786269 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.786316 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:17Z","lastTransitionTime":"2025-12-05T19:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.888388 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.888488 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.888506 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.888568 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.888592 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:17Z","lastTransitionTime":"2025-12-05T19:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.990900 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.990939 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.990951 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.990968 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:17 crc kubenswrapper[4828]: I1205 19:04:17.990979 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:17Z","lastTransitionTime":"2025-12-05T19:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.093022 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.093059 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.093067 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.093080 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.093090 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:18Z","lastTransitionTime":"2025-12-05T19:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.194945 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.194977 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.194986 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.194999 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.195009 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:18Z","lastTransitionTime":"2025-12-05T19:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.297407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.297651 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.297772 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.297910 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.298032 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:18Z","lastTransitionTime":"2025-12-05T19:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.401354 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.401647 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.401774 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.401946 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.402134 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:18Z","lastTransitionTime":"2025-12-05T19:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.446522 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:04:18 crc kubenswrapper[4828]: E1205 19:04:18.446715 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.447086 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.447163 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:04:18 crc kubenswrapper[4828]: E1205 19:04:18.447260 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.447305 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:04:18 crc kubenswrapper[4828]: E1205 19:04:18.447423 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:04:18 crc kubenswrapper[4828]: E1205 19:04:18.447553 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.504890 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.505795 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.506009 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.506162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.506298 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:18Z","lastTransitionTime":"2025-12-05T19:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.598482 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:04:18 crc kubenswrapper[4828]: E1205 19:04:18.598641 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 05 19:04:18 crc kubenswrapper[4828]: E1205 19:04:18.598712 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs podName:0595333b-a181-4a2b-90b8-e2accf80e78e nodeName:}" failed. No retries permitted until 2025-12-05 19:04:20.598695057 +0000 UTC m=+38.493917373 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs") pod "network-metrics-daemon-bvf6n" (UID: "0595333b-a181-4a2b-90b8-e2accf80e78e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.608972 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.609033 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.609055 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.609080 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.609104 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:18Z","lastTransitionTime":"2025-12-05T19:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.712105 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.712207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.712228 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.712255 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.712274 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:18Z","lastTransitionTime":"2025-12-05T19:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.814946 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.814973 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.814983 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.814995 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.815003 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:18Z","lastTransitionTime":"2025-12-05T19:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.917967 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.918025 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.918044 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.918064 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:18 crc kubenswrapper[4828]: I1205 19:04:18.918077 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:18Z","lastTransitionTime":"2025-12-05T19:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.020887 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.020932 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.020977 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.020995 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.021008 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:19Z","lastTransitionTime":"2025-12-05T19:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.123603 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.123670 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.123689 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.123717 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.123762 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:19Z","lastTransitionTime":"2025-12-05T19:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.226385 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.226474 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.226495 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.226524 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.226546 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:19Z","lastTransitionTime":"2025-12-05T19:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.330145 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.330222 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.330234 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.330251 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.330262 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:19Z","lastTransitionTime":"2025-12-05T19:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.434564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.434682 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.434718 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.434749 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.434774 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:19Z","lastTransitionTime":"2025-12-05T19:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.537428 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.537522 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.537558 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.537587 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.537610 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:19Z","lastTransitionTime":"2025-12-05T19:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.641257 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.641331 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.641353 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.641383 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.641407 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:19Z","lastTransitionTime":"2025-12-05T19:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.743754 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.743884 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.743927 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.743972 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.743995 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:19Z","lastTransitionTime":"2025-12-05T19:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.847964 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.848027 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.848049 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.848093 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.848117 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:19Z","lastTransitionTime":"2025-12-05T19:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.950868 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.950904 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.950940 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.950957 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:19 crc kubenswrapper[4828]: I1205 19:04:19.950968 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:19Z","lastTransitionTime":"2025-12-05T19:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.053472 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.053545 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.053570 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.053601 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.053626 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:20Z","lastTransitionTime":"2025-12-05T19:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.155966 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.156017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.156032 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.156061 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.156083 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:20Z","lastTransitionTime":"2025-12-05T19:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.258448 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.258503 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.258512 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.258526 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.258534 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:20Z","lastTransitionTime":"2025-12-05T19:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.362105 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.362190 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.362211 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.362240 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.362264 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:20Z","lastTransitionTime":"2025-12-05T19:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.445546 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.445578 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.445615 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:04:20 crc kubenswrapper[4828]: E1205 19:04:20.445678 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:04:20 crc kubenswrapper[4828]: E1205 19:04:20.445851 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.445900 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:04:20 crc kubenswrapper[4828]: E1205 19:04:20.446043 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
Dec 05 19:04:20 crc kubenswrapper[4828]: E1205 19:04:20.446101 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.464744 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.464869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.464887 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.464904 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.464916 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:20Z","lastTransitionTime":"2025-12-05T19:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.567571 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.567613 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.567649 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.567669 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.567681 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:20Z","lastTransitionTime":"2025-12-05T19:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.622139 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:04:20 crc kubenswrapper[4828]: E1205 19:04:20.622414 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 05 19:04:20 crc kubenswrapper[4828]: E1205 19:04:20.622546 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs podName:0595333b-a181-4a2b-90b8-e2accf80e78e nodeName:}" failed. No retries permitted until 2025-12-05 19:04:24.622512039 +0000 UTC m=+42.517734375 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs") pod "network-metrics-daemon-bvf6n" (UID: "0595333b-a181-4a2b-90b8-e2accf80e78e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.670660 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.670699 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.670711 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.670727 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.670739 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:20Z","lastTransitionTime":"2025-12-05T19:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.777783 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.778066 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.778147 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.778262 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.778336 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:20Z","lastTransitionTime":"2025-12-05T19:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.880547 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.880588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.880600 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.880617 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.880630 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:20Z","lastTransitionTime":"2025-12-05T19:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.983186 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.983240 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.983253 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.983272 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:20 crc kubenswrapper[4828]: I1205 19:04:20.983284 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:20Z","lastTransitionTime":"2025-12-05T19:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.086281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.086324 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.086335 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.086351 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.086362 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:21Z","lastTransitionTime":"2025-12-05T19:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.189438 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.189488 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.189506 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.189530 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.189546 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:21Z","lastTransitionTime":"2025-12-05T19:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.293005 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.293065 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.293082 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.293104 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.293121 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:21Z","lastTransitionTime":"2025-12-05T19:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.396398 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.396474 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.396497 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.396526 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.396550 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:21Z","lastTransitionTime":"2025-12-05T19:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.499203 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.499295 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.499322 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.499352 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.499395 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:21Z","lastTransitionTime":"2025-12-05T19:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.602625 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.602994 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.603134 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.603289 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.603443 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:21Z","lastTransitionTime":"2025-12-05T19:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.706500 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.706569 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.706592 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.706639 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.706661 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:21Z","lastTransitionTime":"2025-12-05T19:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.809912 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.809987 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.810037 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.810082 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.810104 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:21Z","lastTransitionTime":"2025-12-05T19:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.913451 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.913519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.913532 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.913550 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:21 crc kubenswrapper[4828]: I1205 19:04:21.913562 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:21Z","lastTransitionTime":"2025-12-05T19:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.016672 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.016741 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.016759 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.016784 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.016801 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:22Z","lastTransitionTime":"2025-12-05T19:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.119556 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.119640 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.119657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.119682 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.119700 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:22Z","lastTransitionTime":"2025-12-05T19:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.222202 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.222279 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.222296 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.222321 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.222345 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:22Z","lastTransitionTime":"2025-12-05T19:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
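Every NotReady condition above cites the same cause: no CNI configuration under /etc/kubernetes/cni/net.d/. A hedged sketch of the equivalent check, run on the node itself; the directory path is verbatim from the log message, while the glob patterns are only our assumption about what a CNI config file is named:

```go
// Checks whether the CNI conf directory named in the kubelet error contains
// any network configuration yet.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d/"
	var matches []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(dir, pat)) // literal patterns cannot error
		matches = append(matches, m...)
	}
	if len(matches) == 0 {
		fmt.Println("no CNI configuration file in", dir, "- the network plugin has not written its config yet")
		os.Exit(1)
	}
	for _, f := range matches {
		fmt.Println("found:", f)
	}
}
```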
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.325196 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.325249 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.325258 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.325276 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.325289 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:22Z","lastTransitionTime":"2025-12-05T19:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.428546 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.428578 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.428587 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.428601 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.428612 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:22Z","lastTransitionTime":"2025-12-05T19:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.445463 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:04:22 crc kubenswrapper[4828]: E1205 19:04:22.445583 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.445472 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.445606 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
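The "Failed to update status for pod" entries that follow all fail on the same webhook call: TLS verification against https://127.0.0.1:9743 rejects a serving certificate that expired 2025-08-24T17:21:41Z while the node clock reads 2025-12-05. A diagnostic sketch for confirming the validity window from the node; the endpoint is taken from the log, and verification is deliberately skipped so the expired certificate can still be inspected:

```go
// Fetches the certificate presented at the webhook endpoint and prints its
// notBefore/notAfter window, flagging expiry relative to the local clock.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0] // servers always present a leaf cert
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if now := time.Now(); now.After(cert.NotAfter) {
		fmt.Println("certificate has expired relative to this clock:", now.Format(time.RFC3339))
	}
}
```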
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.445463 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:04:22 crc kubenswrapper[4828]: E1205 19:04:22.445651 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:04:22 crc kubenswrapper[4828]: E1205 19:04:22.445708 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:04:22 crc kubenswrapper[4828]: E1205 19:04:22.445889 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.466528 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.480441 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.494080 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.508233 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.530905 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.531151 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.531279 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.531376 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.531459 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:22Z","lastTransitionTime":"2025-12-05T19:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.533035 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.545605 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.560620 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.575367 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.588928 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.612784 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"message\\\":\\\"le-plugin-85b44fc459-gdk6g\\\\nI1205 19:04:13.382483 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-phlsx after 0 failed attempt(s)\\\\nI1205 19:04:13.382424 6266 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 19:04:13.382492 6266 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-phlsx\\\\nI1205 19:04:13.382495 6266 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1205 19:04:13.382443 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ksv4w after 0 failed attempt(s)\\\\nI1205 19:04:13.382506 6266 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ksv4w\\\\nI1205 19:04:13.382506 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1205 19:04:13.382506 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.624106 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.634295 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.634340 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.634351 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.634369 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.634381 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:22Z","lastTransitionTime":"2025-12-05T19:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.640702 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.653436 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.665069 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.677262 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.691693 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.708945 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.736693 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.736732 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.736743 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.736761 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.736773 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:22Z","lastTransitionTime":"2025-12-05T19:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.838514 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.838546 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.838555 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.838567 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.838575 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:22Z","lastTransitionTime":"2025-12-05T19:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.941002 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.941027 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.941035 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.941048 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:22 crc kubenswrapper[4828]: I1205 19:04:22.941059 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:22Z","lastTransitionTime":"2025-12-05T19:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.043698 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.043771 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.043785 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.043802 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.043814 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:23Z","lastTransitionTime":"2025-12-05T19:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.146664 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.146723 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.146743 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.146768 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.146786 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:23Z","lastTransitionTime":"2025-12-05T19:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.249286 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.249327 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.249339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.249356 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.249366 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:23Z","lastTransitionTime":"2025-12-05T19:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.351985 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.352026 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.352037 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.352055 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.352068 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:23Z","lastTransitionTime":"2025-12-05T19:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.454486 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.454543 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.454562 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.454587 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.454605 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:23Z","lastTransitionTime":"2025-12-05T19:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.556784 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.556889 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.556909 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.556933 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.556950 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:23Z","lastTransitionTime":"2025-12-05T19:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.659052 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.659104 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.659121 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.659144 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.659162 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:23Z","lastTransitionTime":"2025-12-05T19:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.762456 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.762509 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.762526 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.762550 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.762567 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:23Z","lastTransitionTime":"2025-12-05T19:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.866363 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.866435 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.866462 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.866493 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.866510 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:23Z","lastTransitionTime":"2025-12-05T19:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.970129 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.970192 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.970214 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.970239 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:23 crc kubenswrapper[4828]: I1205 19:04:23.970260 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:23Z","lastTransitionTime":"2025-12-05T19:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.072501 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.072540 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.072547 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.072561 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.072572 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.150200 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.150229 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.150240 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.150256 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.150267 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: E1205 19:04:24.165560 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:24Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.168963 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.168986 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.168994 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.169006 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.169015 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: E1205 19:04:24.183242 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:24Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.186602 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.186649 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.186668 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.186691 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.186708 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: E1205 19:04:24.204195 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:24Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.207744 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.207790 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.207802 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.207840 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.207860 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: E1205 19:04:24.221288 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:24Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.224672 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.224716 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.224730 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.224751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.224767 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: E1205 19:04:24.237987 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:24Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:24 crc kubenswrapper[4828]: E1205 19:04:24.238144 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.239769 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.239852 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.239866 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.239908 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.239923 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.343636 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.343707 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.343734 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.343763 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.343785 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.445568 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:24 crc kubenswrapper[4828]: E1205 19:04:24.445747 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.446323 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.446645 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:24 crc kubenswrapper[4828]: E1205 19:04:24.446808 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.446920 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.446923 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: E1205 19:04:24.446992 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.447018 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.447095 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.447159 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.447180 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: E1205 19:04:24.447643 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.549928 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.550043 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.550053 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.550068 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.550078 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.652644 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.652716 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.652731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.652750 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.652761 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.666333 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:24 crc kubenswrapper[4828]: E1205 19:04:24.666531 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 19:04:24 crc kubenswrapper[4828]: E1205 19:04:24.666664 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs podName:0595333b-a181-4a2b-90b8-e2accf80e78e nodeName:}" failed. No retries permitted until 2025-12-05 19:04:32.666636446 +0000 UTC m=+50.561858802 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs") pod "network-metrics-daemon-bvf6n" (UID: "0595333b-a181-4a2b-90b8-e2accf80e78e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.755548 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.755581 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.755589 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.755604 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.755614 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.858692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.858727 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.858738 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.858751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.858759 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.960763 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.960799 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.960812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.960852 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:24 crc kubenswrapper[4828]: I1205 19:04:24.960864 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:24Z","lastTransitionTime":"2025-12-05T19:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.064078 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.064122 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.064133 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.064151 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.064166 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:25Z","lastTransitionTime":"2025-12-05T19:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.166817 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.166925 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.166952 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.166972 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.166984 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:25Z","lastTransitionTime":"2025-12-05T19:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.269969 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.270040 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.270058 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.270081 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.270098 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:25Z","lastTransitionTime":"2025-12-05T19:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.373270 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.373366 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.373386 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.373413 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.373431 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:25Z","lastTransitionTime":"2025-12-05T19:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.476625 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.476770 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.476791 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.476841 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.476860 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:25Z","lastTransitionTime":"2025-12-05T19:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.580343 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.580421 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.580444 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.580473 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.580491 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:25Z","lastTransitionTime":"2025-12-05T19:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.683955 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.684022 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.684045 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.684074 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.684095 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:25Z","lastTransitionTime":"2025-12-05T19:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.786627 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.786746 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.786815 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.786908 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.787005 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:25Z","lastTransitionTime":"2025-12-05T19:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.889072 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.889135 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.889151 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.889175 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.889192 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:25Z","lastTransitionTime":"2025-12-05T19:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.992206 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.992269 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.992285 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.992310 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:25 crc kubenswrapper[4828]: I1205 19:04:25.992328 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:25Z","lastTransitionTime":"2025-12-05T19:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.095195 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.095233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.095242 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.095257 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.095266 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:26Z","lastTransitionTime":"2025-12-05T19:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.198564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.198646 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.198668 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.198699 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.198719 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:26Z","lastTransitionTime":"2025-12-05T19:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.301463 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.301528 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.301552 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.301581 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.301604 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:26Z","lastTransitionTime":"2025-12-05T19:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.404910 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.404947 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.404963 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.404979 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.404989 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:26Z","lastTransitionTime":"2025-12-05T19:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.446206 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.446218 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.446290 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.446528 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:26 crc kubenswrapper[4828]: E1205 19:04:26.446679 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:26 crc kubenswrapper[4828]: E1205 19:04:26.446864 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:26 crc kubenswrapper[4828]: E1205 19:04:26.447124 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:26 crc kubenswrapper[4828]: E1205 19:04:26.447295 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.506684 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.506725 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.506734 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.506749 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.506762 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:26Z","lastTransitionTime":"2025-12-05T19:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.609590 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.609637 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.609649 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.609667 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.609676 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:26Z","lastTransitionTime":"2025-12-05T19:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.712504 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.712571 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.712596 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.712624 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.712643 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:26Z","lastTransitionTime":"2025-12-05T19:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.814592 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.814635 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.814647 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.814664 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.814675 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:26Z","lastTransitionTime":"2025-12-05T19:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.917071 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.917112 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.917123 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.917135 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:26 crc kubenswrapper[4828]: I1205 19:04:26.917143 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:26Z","lastTransitionTime":"2025-12-05T19:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.020233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.020275 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.020286 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.020303 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.020314 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:27Z","lastTransitionTime":"2025-12-05T19:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.123992 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.124055 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.124073 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.124100 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.124119 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:27Z","lastTransitionTime":"2025-12-05T19:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.226095 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.226142 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.226160 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.226180 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.226195 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:27Z","lastTransitionTime":"2025-12-05T19:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.329253 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.329314 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.329338 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.329367 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.329386 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:27Z","lastTransitionTime":"2025-12-05T19:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.432054 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.432092 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.432100 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.432116 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.432125 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:27Z","lastTransitionTime":"2025-12-05T19:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.534944 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.535018 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.535037 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.535063 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.535082 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:27Z","lastTransitionTime":"2025-12-05T19:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.637769 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.637934 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.638295 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.638376 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.638405 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:27Z","lastTransitionTime":"2025-12-05T19:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.742027 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.742078 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.742089 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.742108 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.742121 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:27Z","lastTransitionTime":"2025-12-05T19:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.843854 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.843890 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.843901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.843918 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.843928 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:27Z","lastTransitionTime":"2025-12-05T19:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.947110 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.947165 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.947178 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.947196 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:27 crc kubenswrapper[4828]: I1205 19:04:27.947206 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:27Z","lastTransitionTime":"2025-12-05T19:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.043058 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.050497 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.050537 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.050546 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.050561 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.050571 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:28Z","lastTransitionTime":"2025-12-05T19:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.056073 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.063130 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"20
25-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.077130 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.088947 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.100077 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.116549 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.129850 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.141888 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.152017 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.153377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.153432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.153444 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.153461 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.153471 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:28Z","lastTransitionTime":"2025-12-05T19:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.165134 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.186211 4828 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"message\\\":\\\"le-plugin-85b44fc459-gdk6g\\\\nI1205 19:04:13.382483 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-phlsx after 0 failed attempt(s)\\\\nI1205 19:04:13.382424 6266 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 19:04:13.382492 6266 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-phlsx\\\\nI1205 19:04:13.382495 6266 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1205 19:04:13.382443 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ksv4w after 0 failed attempt(s)\\\\nI1205 19:04:13.382506 6266 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ksv4w\\\\nI1205 19:04:13.382506 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1205 19:04:13.382506 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.195750 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.207577 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.215667 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.226212 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.238583 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.249263 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.255463 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.255492 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.255501 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.255515 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.255525 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:28Z","lastTransitionTime":"2025-12-05T19:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.262562 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220
d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:28Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.357891 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.357953 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.357974 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.358002 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.358022 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:28Z","lastTransitionTime":"2025-12-05T19:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.446110 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.446181 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.446143 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.446142 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:28 crc kubenswrapper[4828]: E1205 19:04:28.446302 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:28 crc kubenswrapper[4828]: E1205 19:04:28.446401 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:28 crc kubenswrapper[4828]: E1205 19:04:28.446626 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:28 crc kubenswrapper[4828]: E1205 19:04:28.446763 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.461226 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.461285 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.461307 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.461335 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.461356 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:28Z","lastTransitionTime":"2025-12-05T19:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.564659 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.564722 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.564739 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.564762 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.564781 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:28Z","lastTransitionTime":"2025-12-05T19:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.668144 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.668208 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.668228 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.668251 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:28 crc kubenswrapper[4828]: I1205 19:04:28.668271 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:28Z","lastTransitionTime":"2025-12-05T19:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.423321 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.423377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.423389 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.423406 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.423460 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:30Z","lastTransitionTime":"2025-12-05T19:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.446164 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.446207 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:30 crc kubenswrapper[4828]: E1205 19:04:30.446260 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.446274 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.446223 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:30 crc kubenswrapper[4828]: E1205 19:04:30.446593 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:30 crc kubenswrapper[4828]: E1205 19:04:30.446687 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:30 crc kubenswrapper[4828]: E1205 19:04:30.446896 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.448002 4828 scope.go:117] "RemoveContainer" containerID="4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.525466 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.525761 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.525950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.526072 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.526200 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:30Z","lastTransitionTime":"2025-12-05T19:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.629358 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.629688 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.629698 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.629715 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.629726 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:30Z","lastTransitionTime":"2025-12-05T19:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.732316 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.732376 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.732395 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.732442 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.732457 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:30Z","lastTransitionTime":"2025-12-05T19:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.805092 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/1.log" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.808128 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298"} Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.808607 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.829136 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.837052 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.837117 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.837131 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.837151 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.837166 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:30Z","lastTransitionTime":"2025-12-05T19:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.849007 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.864246 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.875704 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.887124 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.896364 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.913494 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2b
a0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.925559 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.936538 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.939115 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.939164 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.939182 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.939200 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.939211 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:30Z","lastTransitionTime":"2025-12-05T19:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.948895 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.963021 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.975231 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.988319 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:30 crc kubenswrapper[4828]: I1205 19:04:30.999099 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:30Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.016182 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.029143 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.041415 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.041528 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:31 crc 
kubenswrapper[4828]: I1205 19:04:31.041576 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.041594 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.041605 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:31Z","lastTransitionTime":"2025-12-05T19:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.046047 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782db
a3a97f436f059f3f43255298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"message\\\":\\\"le-plugin-85b44fc459-gdk6g\\\\nI1205 19:04:13.382483 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-phlsx after 0 failed attempt(s)\\\\nI1205 19:04:13.382424 6266 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 19:04:13.382492 6266 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-phlsx\\\\nI1205 19:04:13.382495 6266 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1205 19:04:13.382443 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ksv4w after 0 failed attempt(s)\\\\nI1205 19:04:13.382506 6266 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ksv4w\\\\nI1205 19:04:13.382506 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1205 19:04:13.382506 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.055631 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.143048 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.143111 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.143124 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.143139 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.143149 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:31Z","lastTransitionTime":"2025-12-05T19:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.256311 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.256525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.256596 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.256655 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.256718 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:31Z","lastTransitionTime":"2025-12-05T19:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.359145 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.359174 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.359182 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.359195 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.359203 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:31Z","lastTransitionTime":"2025-12-05T19:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.461597 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.461667 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.461682 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.461701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.461712 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:31Z","lastTransitionTime":"2025-12-05T19:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.564708 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.564789 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.564812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.564876 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.564901 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:31Z","lastTransitionTime":"2025-12-05T19:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.669178 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.669257 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.669280 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.669303 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.669323 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:31Z","lastTransitionTime":"2025-12-05T19:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.772348 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.772396 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.772406 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.772425 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.772437 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:31Z","lastTransitionTime":"2025-12-05T19:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.814677 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/2.log" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.815721 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/1.log" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.819930 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerID="734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298" exitCode=1 Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.820000 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298"} Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.820067 4828 scope.go:117] "RemoveContainer" containerID="4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.820762 4828 scope.go:117] "RemoveContainer" containerID="734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298" Dec 05 19:04:31 crc kubenswrapper[4828]: E1205 19:04:31.820975 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.852301 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782db
a3a97f436f059f3f43255298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"message\\\":\\\"le-plugin-85b44fc459-gdk6g\\\\nI1205 19:04:13.382483 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-phlsx after 0 failed attempt(s)\\\\nI1205 19:04:13.382424 6266 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 19:04:13.382492 6266 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-phlsx\\\\nI1205 19:04:13.382495 6266 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1205 19:04:13.382443 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ksv4w after 0 failed attempt(s)\\\\nI1205 19:04:13.382506 6266 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ksv4w\\\\nI1205 19:04:13.382506 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1205 19:04:13.382506 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:31Z\\\",\\\"message\\\":\\\"uring zone local for Pod openshift-multus/multus-ksv4w in node crc\\\\nI1205 19:04:31.357736 6485 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt\\\\nI1205 19:04:31.357704 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1205 19:04:31.357745 6485 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1205 19:04:31.357695 6485 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1205 19:04:31.357755 6485 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI1205 19:04:31.357759 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF1205 19:04:31.357761 6485 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin 
network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df
1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.863842 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.876024 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.876070 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.876112 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.876134 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.876149 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:31Z","lastTransitionTime":"2025-12-05T19:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.877918 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.893385 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.904183 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.916742 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.932411 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.945750 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.975524 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2b
a0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.978281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.978312 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.978323 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.978340 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.978351 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:31Z","lastTransitionTime":"2025-12-05T19:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:31 crc kubenswrapper[4828]: I1205 19:04:31.994486 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:31Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.009604 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.022990 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.039532 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.055556 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.071427 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.081181 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.081250 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.081272 4828 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.081303 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.081324 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:32Z","lastTransitionTime":"2025-12-05T19:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.085361 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.100861 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.118890 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.184130 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.184180 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 
19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.184192 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.184210 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.184223 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:32Z","lastTransitionTime":"2025-12-05T19:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.248848 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.249039 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.249063 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.249076 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.249133 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 19:05:04.249115744 +0000 UTC m=+82.144338050 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.286541 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.286602 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.286622 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.286643 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.286663 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:32Z","lastTransitionTime":"2025-12-05T19:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.349692 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.349874 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.349897 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:05:04.349874498 +0000 UTC m=+82.245096814 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.349960 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.350025 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.350041 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.350097 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:05:04.350071014 +0000 UTC m=+82.245293360 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.350117 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.350286 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:05:04.350275659 +0000 UTC m=+82.245497975 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.350192 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.350369 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.350396 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.350477 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 19:05:04.350456624 +0000 UTC m=+82.245678950 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.392423 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.392493 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.392513 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.392539 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.392564 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:32Z","lastTransitionTime":"2025-12-05T19:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.445485 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.445524 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.445551 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.445589 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.445674 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.445764 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.445908 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.446053 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.473459 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842a
a740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.489428 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.497065 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.497108 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.497123 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.497139 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.497153 4828 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:32Z","lastTransitionTime":"2025-12-05T19:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.508732 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.528104 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.540667 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.553976 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.566741 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.578717 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.594394 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.599162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.599211 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.599229 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.599252 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.599270 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:32Z","lastTransitionTime":"2025-12-05T19:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.614089 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.637686 4828 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ce2785deb49a5b09fd305b8c9c4aa4bc10baf68b1a3bc48fe2d07bc947b4771\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"message\\\":\\\"le-plugin-85b44fc459-gdk6g\\\\nI1205 19:04:13.382483 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-phlsx after 0 failed attempt(s)\\\\nI1205 19:04:13.382424 6266 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 19:04:13.382492 6266 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-phlsx\\\\nI1205 19:04:13.382495 6266 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1205 19:04:13.382443 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ksv4w after 0 failed attempt(s)\\\\nI1205 19:04:13.382506 6266 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-ksv4w\\\\nI1205 19:04:13.382506 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1205 19:04:13.382506 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:31Z\\\",\\\"message\\\":\\\"uring zone local for Pod openshift-multus/multus-ksv4w in node crc\\\\nI1205 19:04:31.357736 6485 
default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt\\\\nI1205 19:04:31.357704 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1205 19:04:31.357745 6485 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1205 19:04:31.357695 6485 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1205 19:04:31.357755 6485 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI1205 19:04:31.357759 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF1205 19:04:31.357761 6485 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.660247 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.690453 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.701634 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.701664 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.701675 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.701690 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.701702 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:32Z","lastTransitionTime":"2025-12-05T19:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.709851 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.718708 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.727412 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.740801 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.750728 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.754041 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.754148 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.754214 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs podName:0595333b-a181-4a2b-90b8-e2accf80e78e nodeName:}" failed. No retries permitted until 2025-12-05 19:04:48.754195173 +0000 UTC m=+66.649417479 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs") pod "network-metrics-daemon-bvf6n" (UID: "0595333b-a181-4a2b-90b8-e2accf80e78e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.803745 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.803782 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.803792 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.803806 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.803816 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:32Z","lastTransitionTime":"2025-12-05T19:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.824480 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/2.log" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.827335 4828 scope.go:117] "RemoveContainer" containerID="734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298" Dec 05 19:04:32 crc kubenswrapper[4828]: E1205 19:04:32.827497 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.845565 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782db
a3a97f436f059f3f43255298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:31Z\\\",\\\"message\\\":\\\"uring zone local for Pod openshift-multus/multus-ksv4w in node crc\\\\nI1205 19:04:31.357736 6485 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt\\\\nI1205 19:04:31.357704 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1205 19:04:31.357745 6485 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1205 19:04:31.357695 6485 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1205 19:04:31.357755 6485 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI1205 19:04:31.357759 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF1205 19:04:31.357761 6485 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.856925 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.867729 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.882416 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.893212 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.903991 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.905334 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.905366 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.905378 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.905394 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.905406 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:32Z","lastTransitionTime":"2025-12-05T19:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.915272 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.926915 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.945027 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.959585 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.975020 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:32 crc kubenswrapper[4828]: I1205 19:04:32.986887 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.000735 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:32Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.007379 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.007414 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.007425 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.007441 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.007451 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:33Z","lastTransitionTime":"2025-12-05T19:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.014298 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:33Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.028135 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:33Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.043485 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:33Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.056073 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:33Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.071260 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:33Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.109997 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.110034 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 
19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.110047 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.110060 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.110071 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:33Z","lastTransitionTime":"2025-12-05T19:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.212554 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.212583 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.212594 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.212610 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.212622 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:33Z","lastTransitionTime":"2025-12-05T19:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.314451 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.314509 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.314519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.314534 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.314545 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:33Z","lastTransitionTime":"2025-12-05T19:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
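[Annotation: every "Failed to update status for pod" record above fails the same way. The kubelet's status patch goes through the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/pod, and the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-05T19:04:32Z, so every patch is rejected during the TLS handshake before the API server can apply it. The Go sketch below reproduces the NotBefore/NotAfter window check that yields this exact wording; it is a minimal illustration, not OpenShift code. The /etc/webhook-cert/ mount path is taken from the webhook container's volumeMounts logged above, but the tls.crt filename inside it is an assumption.]

// certcheck.go: minimal sketch of the x509 validity-window comparison behind
// the "certificate has expired or is not yet valid" failures in this log.
// The /etc/webhook-cert/ directory appears in the webhook pod spec above;
// the tls.crt filename is an assumed convention, not read from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt") // filename assumed
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block in file")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	// Same window check a TLS handshake performs; when it fails, Go's TLS
	// stack reports the "current time ... is after ..." message seen above.
	if now.Before(cert.NotBefore) {
		fmt.Printf("certificate not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	} else if now.After(cert.NotAfter) {
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	} else {
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

[If the printed NotAfter matches 2025-08-24T17:21:41Z, the webhook is still serving the stale certificate; on CRC this pattern is typical of a VM resumed long after its internal certificates lapsed, and cert rotation has to catch up before these status patches can succeed.]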
Has your network provider started?"} Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.416991 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.417038 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.417054 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.417078 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.417095 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:33Z","lastTransitionTime":"2025-12-05T19:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.520324 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.520390 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.520408 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.520432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.520449 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:33Z","lastTransitionTime":"2025-12-05T19:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.623492 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.623746 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.623871 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.623960 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.624038 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:33Z","lastTransitionTime":"2025-12-05T19:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.727204 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.727262 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.727279 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.727303 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.727320 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:33Z","lastTransitionTime":"2025-12-05T19:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.830689 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.830732 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.830743 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.830759 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.830771 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:33Z","lastTransitionTime":"2025-12-05T19:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.934257 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.934711 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.934937 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.935130 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:33 crc kubenswrapper[4828]: I1205 19:04:33.935305 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:33Z","lastTransitionTime":"2025-12-05T19:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.039060 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.039106 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.039117 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.039132 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.039142 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:34Z","lastTransitionTime":"2025-12-05T19:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.142089 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.142131 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.142139 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.142155 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.142166 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:34Z","lastTransitionTime":"2025-12-05T19:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.244597 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.244662 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.244683 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.244709 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.244730 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:34Z","lastTransitionTime":"2025-12-05T19:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.347729 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.347785 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.347802 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.347872 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.347893 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:34Z","lastTransitionTime":"2025-12-05T19:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.373505 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.373564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.373581 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.373607 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.373625 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:34Z","lastTransitionTime":"2025-12-05T19:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
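The loop above is the kubelet's readiness gate in action: the container runtime keeps reporting NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI network configuration, so every status sync re-records the same NotReady condition. As a rough illustration only (a hypothetical sketch, not kubelet or CRI-O source; the path comes from the log message and the extensions from CNI convention), the check behind the message amounts to scanning that directory for a config file:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file (.conf, .conflist, or .json, per CNI convention).
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Path taken from the log message itself.
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	if ok {
		fmt.Println("NetworkReady=true: CNI configuration present")
	} else {
		// The condition the kubelet keeps logging above.
		fmt.Println("NetworkReady=false: no CNI configuration file found")
	}
}

Once a network plugin drops its configuration file into that directory, the same status loop flips the Ready condition back to True on the next sync.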
Dec 05 19:04:34 crc kubenswrapper[4828]: E1205 19:04:34.395203 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:34Z is after 
2025-08-24T17:21:41Z"
[… an identical retry record follows here (E1205 19:04:34.421118, same ~4 KB status patch, same expired-certificate failure from the node.network-node-identity.openshift.io webhook) …]
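The patch payload itself is routine (pressure conditions, allocatable/capacity, the node's image list); what kills it is admission: the network-node-identity webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, months before the current clock of 2025-12-05, so every PATCH of node "crc" is rejected at the TLS layer. A quick way to confirm what that endpoint is actually serving, sketched in Go with only the standard library (the address is taken from the failing Post in the log; everything else is a hypothetical diagnostic, not OpenShift code):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Webhook endpoint copied from the failing Post in the log.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the cert even though it fails verification
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	// First certificate in the chain is the leaf the webhook presented.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Matches the log: current time 2025-12-05T19:04:34Z is after 2025-08-24T17:21:41Z.
		fmt.Println("certificate has expired")
	}
}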
Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.446276 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.446298 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.446444 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:04:34 crc kubenswrapper[4828]: E1205 19:04:34.446643 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.446690 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:04:34 crc kubenswrapper[4828]: E1205 19:04:34.446875 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:04:34 crc kubenswrapper[4828]: E1205 19:04:34.447051 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:34 crc kubenswrapper[4828]: E1205 19:04:34.447243 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:34 crc kubenswrapper[4828]: E1205 19:04:34.447441 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:34Z is after 
2025-08-24T17:21:41Z" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.480886 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.480946 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.480969 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.481000 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.481023 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:34Z","lastTransitionTime":"2025-12-05T19:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:34 crc kubenswrapper[4828]: E1205 19:04:34.502097 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:34Z is after 
2025-08-24T17:21:41Z" Dec 05 19:04:34 crc kubenswrapper[4828]: E1205 19:04:34.502425 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.504669 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.504735 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.504757 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.504787 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.504810 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:34Z","lastTransitionTime":"2025-12-05T19:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.607683 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.607731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.607743 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.607761 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.607774 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:34Z","lastTransitionTime":"2025-12-05T19:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.711328 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.711602 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.711668 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.711745 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.711807 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:34Z","lastTransitionTime":"2025-12-05T19:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.815409 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.815453 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.815464 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.815484 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.815496 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:34Z","lastTransitionTime":"2025-12-05T19:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.917438 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.917510 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.917521 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.917540 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:34 crc kubenswrapper[4828]: I1205 19:04:34.917551 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:34Z","lastTransitionTime":"2025-12-05T19:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.020150 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.020222 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.020239 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.020261 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.020279 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:35Z","lastTransitionTime":"2025-12-05T19:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.123613 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.123679 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.123706 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.123723 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.123733 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:35Z","lastTransitionTime":"2025-12-05T19:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.226520 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.226573 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.226585 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.226604 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.226616 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:35Z","lastTransitionTime":"2025-12-05T19:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.330118 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.330197 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.330220 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.330253 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.330279 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:35Z","lastTransitionTime":"2025-12-05T19:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.434171 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.434262 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.434281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.434343 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.434367 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:35Z","lastTransitionTime":"2025-12-05T19:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.537609 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.537678 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.537717 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.537748 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.537769 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:35Z","lastTransitionTime":"2025-12-05T19:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.640517 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.640590 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.640620 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.640655 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.640679 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:35Z","lastTransitionTime":"2025-12-05T19:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.743798 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.743863 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.743878 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.743895 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.743904 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:35Z","lastTransitionTime":"2025-12-05T19:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.850559 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.850609 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.850635 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.850657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.850675 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:35Z","lastTransitionTime":"2025-12-05T19:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.953554 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.953613 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.953626 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.953643 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:35 crc kubenswrapper[4828]: I1205 19:04:35.953656 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:35Z","lastTransitionTime":"2025-12-05T19:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.056572 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.056650 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.056678 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.056708 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.056729 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:36Z","lastTransitionTime":"2025-12-05T19:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.159518 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.159558 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.159567 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.159581 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.159592 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:36Z","lastTransitionTime":"2025-12-05T19:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.262623 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.262669 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.262677 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.262692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.262703 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:36Z","lastTransitionTime":"2025-12-05T19:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.365725 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.365769 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.365780 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.365796 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.365810 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:36Z","lastTransitionTime":"2025-12-05T19:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.446184 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.446244 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.446244 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:36 crc kubenswrapper[4828]: E1205 19:04:36.446392 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.446429 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:36 crc kubenswrapper[4828]: E1205 19:04:36.446621 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:36 crc kubenswrapper[4828]: E1205 19:04:36.446753 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:36 crc kubenswrapper[4828]: E1205 19:04:36.446808 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.468098 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.468147 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.468159 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.468176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.468189 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:36Z","lastTransitionTime":"2025-12-05T19:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.570480 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.570528 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.570539 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.570555 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.570567 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:36Z","lastTransitionTime":"2025-12-05T19:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.673199 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.673262 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.673273 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.673290 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.673301 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:36Z","lastTransitionTime":"2025-12-05T19:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.775645 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.775713 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.775731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.775756 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.775774 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:36Z","lastTransitionTime":"2025-12-05T19:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.879127 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.879224 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.879251 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.879285 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.879309 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:36Z","lastTransitionTime":"2025-12-05T19:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.982112 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.982169 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.982185 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.982205 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:36 crc kubenswrapper[4828]: I1205 19:04:36.982220 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:36Z","lastTransitionTime":"2025-12-05T19:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.084109 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.084153 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.084162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.084175 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.084184 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:37Z","lastTransitionTime":"2025-12-05T19:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.187679 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.187751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.187770 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.187798 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.187815 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:37Z","lastTransitionTime":"2025-12-05T19:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.291349 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.291417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.291956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.292012 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.292035 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:37Z","lastTransitionTime":"2025-12-05T19:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.395500 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.395568 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.395586 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.395610 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.395634 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:37Z","lastTransitionTime":"2025-12-05T19:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.497934 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.497997 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.498012 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.498033 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.498049 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:37Z","lastTransitionTime":"2025-12-05T19:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.601279 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.601320 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.601331 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.601347 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.601359 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:37Z","lastTransitionTime":"2025-12-05T19:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.704178 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.704266 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.704296 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.704328 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.704351 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:37Z","lastTransitionTime":"2025-12-05T19:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.806469 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.806523 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.806535 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.806553 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.806565 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:37Z","lastTransitionTime":"2025-12-05T19:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.909161 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.909213 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.909230 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.909252 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:37 crc kubenswrapper[4828]: I1205 19:04:37.909269 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:37Z","lastTransitionTime":"2025-12-05T19:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.012025 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.012094 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.012112 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.012135 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.012153 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:38Z","lastTransitionTime":"2025-12-05T19:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.114285 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.114372 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.114396 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.114429 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.114457 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:38Z","lastTransitionTime":"2025-12-05T19:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.217226 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.217311 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.217322 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.217370 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.217384 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:38Z","lastTransitionTime":"2025-12-05T19:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.323948 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.324033 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.324046 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.324078 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.324090 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:38Z","lastTransitionTime":"2025-12-05T19:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.426692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.426757 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.426774 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.426798 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.426816 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:38Z","lastTransitionTime":"2025-12-05T19:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.446001 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.446062 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:38 crc kubenswrapper[4828]: E1205 19:04:38.446146 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.446168 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.446228 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:38 crc kubenswrapper[4828]: E1205 19:04:38.446334 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:38 crc kubenswrapper[4828]: E1205 19:04:38.446374 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:38 crc kubenswrapper[4828]: E1205 19:04:38.446429 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.530609 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.530672 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.530683 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.530700 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.530711 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:38Z","lastTransitionTime":"2025-12-05T19:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.633324 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.633378 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.633389 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.633409 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.633420 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:38Z","lastTransitionTime":"2025-12-05T19:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.735502 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.735537 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.735545 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.735561 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.735588 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:38Z","lastTransitionTime":"2025-12-05T19:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.839085 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.839138 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.839150 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.839168 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.839178 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:38Z","lastTransitionTime":"2025-12-05T19:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.942439 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.942470 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.942478 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.942491 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:38 crc kubenswrapper[4828]: I1205 19:04:38.942501 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:38Z","lastTransitionTime":"2025-12-05T19:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.046300 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.046382 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.046405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.046429 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.046445 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:39Z","lastTransitionTime":"2025-12-05T19:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.149393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.149432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.149444 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.149460 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.149472 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:39Z","lastTransitionTime":"2025-12-05T19:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.252084 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.252950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.253105 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.253242 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.253373 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:39Z","lastTransitionTime":"2025-12-05T19:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.356771 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.356804 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.356814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.356844 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.356855 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:39Z","lastTransitionTime":"2025-12-05T19:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.458751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.458786 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.458797 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.458861 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.458872 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:39Z","lastTransitionTime":"2025-12-05T19:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.561732 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.561791 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.561800 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.561814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.561858 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:39Z","lastTransitionTime":"2025-12-05T19:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.663811 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.663907 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.663927 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.663952 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.663971 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:39Z","lastTransitionTime":"2025-12-05T19:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.766223 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.766262 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.766275 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.766289 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.766301 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:39Z","lastTransitionTime":"2025-12-05T19:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.869164 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.869187 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.869195 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.869207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.869215 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:39Z","lastTransitionTime":"2025-12-05T19:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.972734 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.972814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.972900 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.972956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:39 crc kubenswrapper[4828]: I1205 19:04:39.972980 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:39Z","lastTransitionTime":"2025-12-05T19:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.075991 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.076052 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.076074 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.076103 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.076129 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:40Z","lastTransitionTime":"2025-12-05T19:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.179352 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.179393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.179404 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.179422 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.179655 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:40Z","lastTransitionTime":"2025-12-05T19:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.282613 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.282671 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.282689 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.282713 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.282731 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:40Z","lastTransitionTime":"2025-12-05T19:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.385331 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.385377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.385390 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.385407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.385419 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:40Z","lastTransitionTime":"2025-12-05T19:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.446097 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.446126 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.446176 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:40 crc kubenswrapper[4828]: E1205 19:04:40.446322 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.446393 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:40 crc kubenswrapper[4828]: E1205 19:04:40.446521 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:40 crc kubenswrapper[4828]: E1205 19:04:40.446580 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:40 crc kubenswrapper[4828]: E1205 19:04:40.446700 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.487508 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.487561 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.487577 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.487599 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.487614 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:40Z","lastTransitionTime":"2025-12-05T19:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.590650 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.590748 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.590763 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.590805 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.590848 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:40Z","lastTransitionTime":"2025-12-05T19:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.694598 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.694673 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.694698 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.694726 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.694749 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:40Z","lastTransitionTime":"2025-12-05T19:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.797649 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.797687 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.797697 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.797712 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.797723 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:40Z","lastTransitionTime":"2025-12-05T19:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.900101 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.900155 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.900166 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.900182 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:40 crc kubenswrapper[4828]: I1205 19:04:40.900194 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:40Z","lastTransitionTime":"2025-12-05T19:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.002756 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.002809 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.002853 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.002872 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.002883 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:41Z","lastTransitionTime":"2025-12-05T19:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.105086 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.105125 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.105133 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.105146 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.105155 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:41Z","lastTransitionTime":"2025-12-05T19:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.207729 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.207785 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.207801 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.207852 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.207871 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:41Z","lastTransitionTime":"2025-12-05T19:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.310586 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.310657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.310682 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.310712 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.310736 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:41Z","lastTransitionTime":"2025-12-05T19:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.413378 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.413428 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.413444 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.413465 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.413480 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:41Z","lastTransitionTime":"2025-12-05T19:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.516569 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.516651 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.516673 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.516701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.516719 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:41Z","lastTransitionTime":"2025-12-05T19:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.619860 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.619927 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.619943 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.619968 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.619987 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:41Z","lastTransitionTime":"2025-12-05T19:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.722392 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.722446 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.722763 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.722786 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.722802 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:41Z","lastTransitionTime":"2025-12-05T19:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.825533 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.825604 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.825618 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.825639 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.825656 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:41Z","lastTransitionTime":"2025-12-05T19:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.928216 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.928276 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.928293 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.928314 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:41 crc kubenswrapper[4828]: I1205 19:04:41.928330 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:41Z","lastTransitionTime":"2025-12-05T19:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.031382 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.031439 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.031455 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.031478 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.031494 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:42Z","lastTransitionTime":"2025-12-05T19:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.134338 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.134415 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.134432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.134458 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.134477 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:42Z","lastTransitionTime":"2025-12-05T19:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.236618 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.236657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.236668 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.236684 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.236694 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:42Z","lastTransitionTime":"2025-12-05T19:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.339382 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.339460 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.339481 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.339509 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.339532 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:42Z","lastTransitionTime":"2025-12-05T19:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.442446 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.442524 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.442538 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.442556 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.442567 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:42Z","lastTransitionTime":"2025-12-05T19:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.445502 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.445545 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.445678 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:04:42 crc kubenswrapper[4828]: E1205 19:04:42.446247 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
Dec 05 19:04:42 crc kubenswrapper[4828]: E1205 19:04:42.446061 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:04:42 crc kubenswrapper[4828]: E1205 19:04:42.446387 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.445686 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:04:42 crc kubenswrapper[4828]: E1205 19:04:42.446650 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.479387 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:31Z\\\",\\\"message\\\":\\\"uring zone local for Pod openshift-multus/multus-ksv4w in node crc\\\\nI1205 19:04:31.357736 6485 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt\\\\nI1205 19:04:31.357704 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1205 19:04:31.357745 6485 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1205 19:04:31.357695 6485 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1205 19:04:31.357755 6485 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI1205 19:04:31.357759 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF1205 19:04:31.357761 6485 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.490901 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.502414 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.517428 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.530785 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.545092 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.545157 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.545176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.545200 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.545217 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:42Z","lastTransitionTime":"2025-12-05T19:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.545622 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.561169 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.572340 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.583852 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.596637 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z"
Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.617236 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.630775 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726
da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.643059 4828 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.647228 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.647277 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.647322 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.647343 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.647360 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:42Z","lastTransitionTime":"2025-12-05T19:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.653685 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.667401 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.679509 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.692376 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.703445 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:42Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.749203 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.749241 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.749249 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.749263 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.749271 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:42Z","lastTransitionTime":"2025-12-05T19:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.851485 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.851549 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.851560 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.851572 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.851580 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:42Z","lastTransitionTime":"2025-12-05T19:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.953703 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.953756 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.953764 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.953779 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:42 crc kubenswrapper[4828]: I1205 19:04:42.953788 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:42Z","lastTransitionTime":"2025-12-05T19:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.056163 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.056229 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.056246 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.056270 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.056289 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:43Z","lastTransitionTime":"2025-12-05T19:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.159035 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.159104 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.159121 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.159149 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.159169 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:43Z","lastTransitionTime":"2025-12-05T19:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.261939 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.261988 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.262005 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.262024 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.262038 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:43Z","lastTransitionTime":"2025-12-05T19:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.364334 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.364882 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.364963 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.365092 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.365202 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:43Z","lastTransitionTime":"2025-12-05T19:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.468194 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.468265 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.468287 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.468312 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.468347 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:43Z","lastTransitionTime":"2025-12-05T19:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.570786 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.571310 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.571396 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.571480 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.571563 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:43Z","lastTransitionTime":"2025-12-05T19:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.674077 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.674142 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.674160 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.674185 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.674202 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:43Z","lastTransitionTime":"2025-12-05T19:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.777811 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.778094 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.778160 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.778225 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.778286 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:43Z","lastTransitionTime":"2025-12-05T19:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.884048 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.884206 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.884222 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.884239 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.884248 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:43Z","lastTransitionTime":"2025-12-05T19:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.987806 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.987861 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.987904 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.987920 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:43 crc kubenswrapper[4828]: I1205 19:04:43.987931 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:43Z","lastTransitionTime":"2025-12-05T19:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.091175 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.091228 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.091247 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.091272 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.091291 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.194352 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.194420 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.194441 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.194469 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.194491 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.297804 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.297908 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.297924 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.297948 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.297966 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.401237 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.401287 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.401304 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.401320 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.401329 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.446365 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.446931 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.446979 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:44 crc kubenswrapper[4828]: E1205 19:04:44.447068 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.447353 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:44 crc kubenswrapper[4828]: E1205 19:04:44.447433 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.447441 4828 scope.go:117] "RemoveContainer" containerID="734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298" Dec 05 19:04:44 crc kubenswrapper[4828]: E1205 19:04:44.447595 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:44 crc kubenswrapper[4828]: E1205 19:04:44.447728 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:44 crc kubenswrapper[4828]: E1205 19:04:44.447849 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.503783 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.503928 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.503960 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.503993 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.504017 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.607906 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.607973 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.607984 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.608002 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.608013 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.716153 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.716213 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.716231 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.716294 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.716312 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.725535 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.725585 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.725594 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.725604 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.725613 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: E1205 19:04:44.745801 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:44Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.749539 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.749585 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.749596 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.749614 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.749627 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: E1205 19:04:44.765289 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:44Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.769103 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.769147 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.769159 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.769176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.769188 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: E1205 19:04:44.783281 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:44Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.788100 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.788154 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.788176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.788202 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.788221 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: E1205 19:04:44.808997 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:44Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.814354 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.814423 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.814448 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.814477 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.814500 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: E1205 19:04:44.829517 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:44Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:44 crc kubenswrapper[4828]: E1205 19:04:44.829654 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.831070 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
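Every status-patch retry above terminates in the same TLS failure: the node.network-node-identity.openshift.io webhook serving at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, well before the node's current clock of 2025-12-05. A minimal Python sketch of the kind of check that confirms this from the node itself; the address comes verbatim from the log, and the script is purely illustrative, not part of any OpenShift tooling:

    import ssl, socket

    # Endpoint taken verbatim from the webhook error in the log above.
    HOST, PORT = "127.0.0.1", 9743

    # Fetch the serving certificate without validating it -- validation is
    # exactly what fails here, so the handshake must not enforce trust/expiry.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    # Print the PEM; its notBefore/notAfter fields should show the
    # 2025-08-24T17:21:41Z expiry the kubelet is reporting.
    print(ssl.DER_cert_to_PEM_cert(der))
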
event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.831130 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.831147 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.831178 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.831195 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.933673 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.933708 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.933717 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.933730 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:44 crc kubenswrapper[4828]: I1205 19:04:44.933739 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:44Z","lastTransitionTime":"2025-12-05T19:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.036199 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.036258 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.036281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.036303 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.036319 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:45Z","lastTransitionTime":"2025-12-05T19:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.139361 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.139403 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.139413 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.139429 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.139441 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:45Z","lastTransitionTime":"2025-12-05T19:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.242287 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.242359 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.242377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.242399 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.242417 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:45Z","lastTransitionTime":"2025-12-05T19:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.345010 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.345046 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.345057 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.345073 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.345085 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:45Z","lastTransitionTime":"2025-12-05T19:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.447625 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.447696 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.447720 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.447748 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.447769 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:45Z","lastTransitionTime":"2025-12-05T19:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.550753 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.550807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.551175 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.551205 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.551218 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:45Z","lastTransitionTime":"2025-12-05T19:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.654042 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.654120 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.654142 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.654168 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.654186 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:45Z","lastTransitionTime":"2025-12-05T19:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.757384 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.757431 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.757442 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.757462 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.757474 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:45Z","lastTransitionTime":"2025-12-05T19:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.859890 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.859933 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.859943 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.859959 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.859969 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:45Z","lastTransitionTime":"2025-12-05T19:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.962863 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.962901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.962913 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.962929 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:45 crc kubenswrapper[4828]: I1205 19:04:45.962940 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:45Z","lastTransitionTime":"2025-12-05T19:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.065329 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.065371 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.065380 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.065395 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.065404 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:46Z","lastTransitionTime":"2025-12-05T19:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.168142 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.168189 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.168199 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.168216 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.168227 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:46Z","lastTransitionTime":"2025-12-05T19:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.271047 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.271103 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.271115 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.271132 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.271142 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:46Z","lastTransitionTime":"2025-12-05T19:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.376949 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.377027 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.377044 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.377069 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.377089 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:46Z","lastTransitionTime":"2025-12-05T19:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.445858 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.445956 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.446017 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:04:46 crc kubenswrapper[4828]: E1205 19:04:46.446067 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.446112 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:04:46 crc kubenswrapper[4828]: E1205 19:04:46.446169 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:04:46 crc kubenswrapper[4828]: E1205 19:04:46.446282 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
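The pod sync errors and the persistent NotReady condition all report the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A quick, illustrative way to see what the runtime sees in that directory; the path is copied from the log message, and the snippet is a triage aid under that assumption, not kubelet code:

    import os

    # Directory named in every NetworkPluginNotReady message above.
    CNI_DIR = "/etc/kubernetes/cni/net.d"

    try:
        entries = sorted(os.listdir(CNI_DIR))
    except FileNotFoundError:
        entries = None

    if not entries:
        # A missing or empty directory matches the kubelet's complaint exactly.
        print(f"{CNI_DIR} is missing or empty; NetworkReady stays False until "
              "the network provider writes a *.conf/*.conflist file here")
    else:
        for name in entries:
            print(name)
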
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:46 crc kubenswrapper[4828]: E1205 19:04:46.446372 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.479682 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.479714 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.479726 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.479743 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.479754 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:46Z","lastTransitionTime":"2025-12-05T19:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.583266 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.583327 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.583391 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.583419 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.583443 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:46Z","lastTransitionTime":"2025-12-05T19:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.685623 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.685651 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.685659 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.685672 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.685681 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:46Z","lastTransitionTime":"2025-12-05T19:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.788762 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.788816 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.788846 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.788866 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.788879 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:46Z","lastTransitionTime":"2025-12-05T19:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.890704 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.890881 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.890915 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.890939 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.890957 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:46Z","lastTransitionTime":"2025-12-05T19:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.993377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.993455 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.993481 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.993511 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:46 crc kubenswrapper[4828]: I1205 19:04:46.993532 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:46Z","lastTransitionTime":"2025-12-05T19:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.095668 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.095722 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.095739 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.095762 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.095778 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:47Z","lastTransitionTime":"2025-12-05T19:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.197998 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.198039 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.198051 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.198067 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.198079 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:47Z","lastTransitionTime":"2025-12-05T19:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.300540 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.300582 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.300591 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.300609 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.300620 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:47Z","lastTransitionTime":"2025-12-05T19:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.403679 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.403748 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.403760 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.403776 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.403786 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:47Z","lastTransitionTime":"2025-12-05T19:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.506068 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.506115 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.506126 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.506143 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.506157 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:47Z","lastTransitionTime":"2025-12-05T19:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.609017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.609057 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.609069 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.609084 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.609095 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:47Z","lastTransitionTime":"2025-12-05T19:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.723147 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.723196 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.723207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.723223 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.723234 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:47Z","lastTransitionTime":"2025-12-05T19:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.825261 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.825301 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.825313 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.825355 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.825369 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:47Z","lastTransitionTime":"2025-12-05T19:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.927969 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.928043 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.928058 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.928075 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:47 crc kubenswrapper[4828]: I1205 19:04:47.928417 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:47Z","lastTransitionTime":"2025-12-05T19:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.031325 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.031387 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.031410 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.031439 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.031459 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:48Z","lastTransitionTime":"2025-12-05T19:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.135268 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.135349 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.135358 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.135372 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.135391 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:48Z","lastTransitionTime":"2025-12-05T19:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.237767 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.237858 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.237870 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.237888 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.237916 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:48Z","lastTransitionTime":"2025-12-05T19:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.340548 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.340592 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.340603 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.340618 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.340630 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:48Z","lastTransitionTime":"2025-12-05T19:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.443755 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.443798 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.443807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.443837 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.443846 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:48Z","lastTransitionTime":"2025-12-05T19:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
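Judging by the timestamps, the block recurs roughly every 100 ms for as long as the runtime keeps reporting NetworkReady=false, and only stops being re-recorded once a CNI config appears. A toy loop illustrating that shape; the interval is chosen to match the observed spacing, and the real kubelet's update triggers and frequencies are not reproduced here:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	networkReady := false // flips to true once a CNI config is written
	tick := time.NewTicker(100 * time.Millisecond)
	defer tick.Stop()
	stop := time.After(500 * time.Millisecond) // bounded for the example
	for {
		select {
		case t := <-tick.C:
			if !networkReady {
				// Each pass re-records the same four node events plus the
				// "Node became not ready" condition seen in the log.
				fmt.Println(t.Format("15:04:05.000"), "recording NodeNotReady: network still not ready")
			}
		case <-stop:
			return
		}
	}
}
```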
Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.446122 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:04:48 crc kubenswrapper[4828]: E1205 19:04:48.446223 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.446262 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.446282 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.446374 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:04:48 crc kubenswrapper[4828]: E1205 19:04:48.446426 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 19:04:48 crc kubenswrapper[4828]: E1205 19:04:48.446480 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:04:48 crc kubenswrapper[4828]: E1205 19:04:48.446537 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
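"No sandbox for pod can be found" and "no CNI configuration file in /etc/kubernetes/cni/net.d/" are two sides of the same condition: the runtime cannot create pod sandboxes until at least one CNI network config exists in that directory. A sketch of the kind of check involved, assuming the usual CNI file conventions; the actual logic lives in the runtime's CNI handling (e.g. ocicni for CRI-O), not in this snippet:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether the directory contains any file with a
// conventional CNI config extension. An empty directory corresponds to the
// "no CNI configuration file" error seen in the log.
func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	fmt.Println("CNI config present:", ok, "err:", err)
}
```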
pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.546844 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.546896 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.546925 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.546944 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.546955 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:48Z","lastTransitionTime":"2025-12-05T19:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.649632 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.649682 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.649692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.649708 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.649719 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:48Z","lastTransitionTime":"2025-12-05T19:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.752213 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.752268 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.752281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.752297 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.752310 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:48Z","lastTransitionTime":"2025-12-05T19:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:48 crc kubenswrapper[4828]: I1205 19:04:48.830935 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:48 crc kubenswrapper[4828]: E1205 19:04:48.831106 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 19:04:48 crc kubenswrapper[4828]: E1205 19:04:48.831202 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs podName:0595333b-a181-4a2b-90b8-e2accf80e78e nodeName:}" failed. No retries permitted until 2025-12-05 19:05:20.831179058 +0000 UTC m=+98.726401444 (durationBeforeRetry 32s). 
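"No retries permitted until ... (durationBeforeRetry 32s)" is the volume manager's per-operation exponential backoff: each consecutive MountVolume failure roughly doubles the wait before the next attempt, up to a cap. A sketch of that policy; the initial duration and cap below are illustrative assumptions, not kubelet's exact constants:

```go
package main

import (
	"fmt"
	"time"
)

// backoff tracks the wait before the next retry of one pending operation.
type backoff struct {
	next, max time.Duration // assumed values, not kubelet's real constants
}

// fail records a failure and returns the earliest time a retry is allowed,
// doubling the wait for the following failure up to the cap.
func (b *backoff) fail(now time.Time) time.Time {
	retryAt := now.Add(b.next)
	if b.next*2 <= b.max {
		b.next *= 2
	} else {
		b.next = b.max
	}
	return retryAt
}

func main() {
	b := &backoff{next: time.Second, max: 2 * time.Minute}
	now := time.Now()
	for i := 1; i <= 7; i++ {
		at := b.fail(now)
		fmt.Printf("attempt %d failed; no retries permitted until %s\n", i, at.Format(time.RFC3339))
		now = at
	}
}
```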
[... node-status block repeats at ~100 ms intervals from 19:04:48.855472 through 19:04:50.392166 ...]
Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.446249 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.446286 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.446312 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.446412 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:04:50 crc kubenswrapper[4828]: E1205 19:04:50.446406 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 19:04:50 crc kubenswrapper[4828]: E1205 19:04:50.446477 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:04:50 crc kubenswrapper[4828]: E1205 19:04:50.446524 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:04:50 crc kubenswrapper[4828]: E1205 19:04:50.446607 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
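Note that only these four pods are stuck, and all of them need a CNI-managed sandbox. Pods that share the host network namespace, such as the static control-plane pods (the kube-scheduler seen below among them), do not wait on the network plugin, which is how the node can bootstrap its networking at all. A hedged sketch of that guard, not the verbatim pod_workers logic:

```go
package main

import (
	"errors"
	"fmt"
)

// Pod is a stripped-down stand-in: only the field relevant here.
type Pod struct {
	Name        string
	HostNetwork bool
}

var errNetworkNotReady = errors.New("network is not ready: container runtime network not ready: NetworkReady=false")

// syncAllowed returns nil when a pod can be synced: either the plugin is
// ready, or the pod runs on the host network and needs no CNI sandbox.
// A non-nil result surfaces as "Error syncing pod, skipping" in the log.
func syncAllowed(p Pod, networkReady bool) error {
	if networkReady || p.HostNetwork {
		return nil
	}
	return errNetworkNotReady
}

func main() {
	for _, p := range []Pod{
		{Name: "openshift-multus/network-metrics-daemon-bvf6n", HostNetwork: false},
		{Name: "openshift-kube-scheduler/openshift-kube-scheduler-crc", HostNetwork: true},
	} {
		fmt.Println(p.Name, "->", syncAllowed(p, false))
	}
}
```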
[... node-status block repeats at 19:04:50.494186, 19:04:50.597735, and 19:04:50.700427; the next repetition follows in full ...]
Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.802939 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.802996 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.803013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.803037 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.803055 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:50Z","lastTransitionTime":"2025-12-05T19:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.884991 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ksv4w_e927a669-7d9d-442a-b020-339804e95af2/kube-multus/0.log" Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.885041 4828 generic.go:334] "Generic (PLEG): container finished" podID="e927a669-7d9d-442a-b020-339804e95af2" containerID="a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864" exitCode=1 Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.885073 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ksv4w" event={"ID":"e927a669-7d9d-442a-b020-339804e95af2","Type":"ContainerDied","Data":"a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864"} Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.885392 4828 scope.go:117] "RemoveContainer" containerID="a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864" Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.900673 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:50Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.906383 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.906413 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.906422 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.906437 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.906446 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:50Z","lastTransitionTime":"2025-12-05T19:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.918047 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:50Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.929513 4828 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:50Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.942925 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:50Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.957329 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:50Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.980252 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:31Z\\\",\\\"message\\\":\\\"uring zone local for Pod openshift-multus/multus-ksv4w in node crc\\\\nI1205 19:04:31.357736 6485 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt\\\\nI1205 19:04:31.357704 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1205 19:04:31.357745 6485 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1205 19:04:31.357695 6485 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1205 19:04:31.357755 6485 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI1205 19:04:31.357759 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF1205 19:04:31.357761 6485 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:50Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:50 crc kubenswrapper[4828]: I1205 19:04:50.990177 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:50Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.001319 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:50Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.008809 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.008877 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.008894 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.008914 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.008929 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:51Z","lastTransitionTime":"2025-12-05T19:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.012443 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.020914 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.029447 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.039361 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.049384 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.069297 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2b
a0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.086215 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.099287 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.111642 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.112060 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.112076 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.112084 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.112097 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.112107 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:51Z","lastTransitionTime":"2025-12-05T19:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.124050 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:50Z\\\",\\\"message\\\":\\\"2025-12-05T19:04:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df\\\\n2025-12-05T19:04:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df to /host/opt/cni/bin/\\\\n2025-12-05T19:04:05Z [verbose] multus-daemon started\\\\n2025-12-05T19:04:05Z [verbose] Readiness Indicator file check\\\\n2025-12-05T19:04:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.214580 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.214614 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.214623 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.214637 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.214647 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:51Z","lastTransitionTime":"2025-12-05T19:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.317170 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.317194 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.317202 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.317214 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.317223 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:51Z","lastTransitionTime":"2025-12-05T19:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.420352 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.420415 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.420441 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.420469 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.420488 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:51Z","lastTransitionTime":"2025-12-05T19:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.522975 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.523015 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.523026 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.523043 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.523055 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:51Z","lastTransitionTime":"2025-12-05T19:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.626298 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.626339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.626348 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.626364 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.626372 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:51Z","lastTransitionTime":"2025-12-05T19:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.729163 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.729192 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.729199 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.729211 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.729220 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:51Z","lastTransitionTime":"2025-12-05T19:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.831863 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.831905 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.831913 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.831928 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.831937 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:51Z","lastTransitionTime":"2025-12-05T19:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.890314 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ksv4w_e927a669-7d9d-442a-b020-339804e95af2/kube-multus/0.log" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.890371 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ksv4w" event={"ID":"e927a669-7d9d-442a-b020-339804e95af2","Type":"ContainerStarted","Data":"836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5"} Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.908922 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.929380 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.934063 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.934094 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.934105 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.934120 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.934131 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:51Z","lastTransitionTime":"2025-12-05T19:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.939917 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.951170 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.961992 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.974505 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:51 crc kubenswrapper[4828]: I1205 19:04:51.998801 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2b
a0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:51Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.013300 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.023500 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.033771 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.036673 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.036706 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.036719 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.036735 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.036745 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:52Z","lastTransitionTime":"2025-12-05T19:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.048551 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:50Z\\\",\\\"message\\\":\\\"2025-12-05T19:04:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df\\\\n2025-12-05T19:04:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df to /host/opt/cni/bin/\\\\n2025-12-05T19:04:05Z [verbose] multus-daemon started\\\\n2025-12-05T19:04:05Z [verbose] Readiness Indicator file check\\\\n2025-12-05T19:04:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.059031 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.070229 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.081474 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.090081 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.102595 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.122202 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:31Z\\\",\\\"message\\\":\\\"uring zone local for Pod openshift-multus/multus-ksv4w in node crc\\\\nI1205 19:04:31.357736 6485 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt\\\\nI1205 19:04:31.357704 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1205 19:04:31.357745 6485 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1205 19:04:31.357695 6485 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1205 19:04:31.357755 6485 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI1205 19:04:31.357759 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF1205 19:04:31.357761 6485 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.131694 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.139379 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.139398 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.139405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.139417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.139425 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:52Z","lastTransitionTime":"2025-12-05T19:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.243125 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.243651 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.244169 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.244389 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.244559 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:52Z","lastTransitionTime":"2025-12-05T19:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.347611 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.347655 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.347666 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.347684 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.347695 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:52Z","lastTransitionTime":"2025-12-05T19:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.445889 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.445945 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.446032 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:52 crc kubenswrapper[4828]: E1205 19:04:52.446024 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:52 crc kubenswrapper[4828]: E1205 19:04:52.446130 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.446177 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:52 crc kubenswrapper[4828]: E1205 19:04:52.446219 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:52 crc kubenswrapper[4828]: E1205 19:04:52.446258 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.450851 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.450877 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.450885 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.450897 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.450908 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:52Z","lastTransitionTime":"2025-12-05T19:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.465172 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:31Z\\\",\\\"message\\\":\\\"uring zone local for Pod openshift-multus/multus-ksv4w in node crc\\\\nI1205 19:04:31.357736 6485 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt\\\\nI1205 19:04:31.357704 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1205 19:04:31.357745 6485 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1205 19:04:31.357695 6485 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1205 19:04:31.357755 6485 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI1205 19:04:31.357759 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF1205 19:04:31.357761 6485 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.475631 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.486571 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.498843 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.510109 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.519685 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.530648 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.542152 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.553019 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.553058 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.553067 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.553083 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.553092 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:52Z","lastTransitionTime":"2025-12-05T19:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.558505 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.569019 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.577965 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.586952 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.596426 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:50Z\\\",\\\"message\\\":\\\"2025-12-05T19:04:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df\\\\n2025-12-05T19:04:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df to /host/opt/cni/bin/\\\\n2025-12-05T19:04:05Z [verbose] multus-daemon started\\\\n2025-12-05T19:04:05Z [verbose] Readiness Indicator file check\\\\n2025-12-05T19:04:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.605808 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.622757 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.638670 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.648914 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.655360 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:52 
crc kubenswrapper[4828]: I1205 19:04:52.655393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.655402 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.655417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.655427 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:52Z","lastTransitionTime":"2025-12-05T19:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.663169 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-co
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"m
ountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:52Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.770627 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.770692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.770705 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.770743 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.770757 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:52Z","lastTransitionTime":"2025-12-05T19:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.872658 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.872755 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.872772 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.872794 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.872806 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:52Z","lastTransitionTime":"2025-12-05T19:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.975087 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.975131 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.975140 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.975153 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:52 crc kubenswrapper[4828]: I1205 19:04:52.975163 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:52Z","lastTransitionTime":"2025-12-05T19:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.077333 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.077355 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.077364 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.077376 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.077385 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:53Z","lastTransitionTime":"2025-12-05T19:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.179616 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.179658 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.179675 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.179693 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.179709 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:53Z","lastTransitionTime":"2025-12-05T19:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.289839 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.289894 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.289912 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.289929 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.289942 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:53Z","lastTransitionTime":"2025-12-05T19:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.392286 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.392339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.392386 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.392405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.392416 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:53Z","lastTransitionTime":"2025-12-05T19:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.494839 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.494880 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.494893 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.494907 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.494915 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:53Z","lastTransitionTime":"2025-12-05T19:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.596628 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.596661 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.596668 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.596680 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.596688 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:53Z","lastTransitionTime":"2025-12-05T19:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.699949 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.699999 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.700012 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.700029 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.700044 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:53Z","lastTransitionTime":"2025-12-05T19:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.802673 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.802718 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.802729 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.802744 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.802754 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:53Z","lastTransitionTime":"2025-12-05T19:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.904784 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.904812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.904832 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.904848 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:53 crc kubenswrapper[4828]: I1205 19:04:53.904857 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:53Z","lastTransitionTime":"2025-12-05T19:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.006502 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.006538 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.006549 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.006563 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.006575 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.108701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.108733 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.108742 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.108755 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.108766 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.211588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.211617 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.211626 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.211639 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.211648 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.314435 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.314480 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.314491 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.314507 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.314517 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.417216 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.417257 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.417270 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.417287 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.417298 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.446102 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.446155 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:54 crc kubenswrapper[4828]: E1205 19:04:54.446236 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.446245 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.446121 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:54 crc kubenswrapper[4828]: E1205 19:04:54.446368 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:54 crc kubenswrapper[4828]: E1205 19:04:54.446403 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:54 crc kubenswrapper[4828]: E1205 19:04:54.446446 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.519256 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.519307 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.519319 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.519349 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.519361 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.621863 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.621940 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.621952 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.621971 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.622003 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.724137 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.724183 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.724194 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.724210 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.724220 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.826372 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.826444 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.826460 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.826484 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.826500 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.904525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.904560 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.904568 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.904584 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.904593 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: E1205 19:04:54.923014 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:54Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.926080 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.926119 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.926130 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.926146 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.926158 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: E1205 19:04:54.936665 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:54Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.939954 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.939984 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
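Both patch attempts so far (19:04:54.923014 and 19:04:54.936665) die at the same point: the API server cannot deliver the node PATCH to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the current time 2025-12-05T19:04:54Z. A hypothetical diagnostic sketch, run on the node itself, that reads the served certificate's validity window directly; InsecureSkipVerify is what lets it inspect a certificate that normal verification would reject:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	// Webhook endpoint taken from the failed Post URL in the log.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Println("subject:  ", cert.Subject)
    	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
    	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
    	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
    }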
event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.939992 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.940003 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.940011 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: E1205 19:04:54.954756 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:54Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.958322 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.958355 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
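Each setters.go:603 record carries the full Ready condition as inline JSON, the same shape the kubelet patches onto the Node object (v1.NodeCondition). A small sketch that decodes one of the condition payloads quoted above, using a hand-rolled struct rather than the Kubernetes API types so it stays self-contained:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    )

    // NodeCondition mirrors the fields visible in the setters.go:603 records.
    type NodeCondition struct {
    	Type               string `json:"type"`
    	Status             string `json:"status"`
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Reason             string `json:"reason"`
    	Message            string `json:"message"`
    }

    func main() {
    	// Condition payload copied verbatim from the log above.
    	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
    	var c NodeCondition
    	if err := json.Unmarshal([]byte(raw), &c); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
    }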
event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.958363 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.958378 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.958388 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: E1205 19:04:54.968984 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:54Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.972500 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.972528 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.972537 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.972549 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.972557 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:54 crc kubenswrapper[4828]: E1205 19:04:54.983747 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:54Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:54 crc kubenswrapper[4828]: E1205 19:04:54.983955 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.989508 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.989725 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.989882 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.990053 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:54 crc kubenswrapper[4828]: I1205 19:04:54.990174 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:54Z","lastTransitionTime":"2025-12-05T19:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.092876 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.093096 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.093185 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.093251 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.093332 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:55Z","lastTransitionTime":"2025-12-05T19:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.195799 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.195906 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.195932 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.195964 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.195986 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:55Z","lastTransitionTime":"2025-12-05T19:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.299264 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.299338 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.299363 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.299392 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.299412 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:55Z","lastTransitionTime":"2025-12-05T19:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.402095 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.402144 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.402157 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.402174 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.402187 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:55Z","lastTransitionTime":"2025-12-05T19:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.504353 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.504406 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.504418 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.504433 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.504445 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:55Z","lastTransitionTime":"2025-12-05T19:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.607155 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.607195 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.607206 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.607224 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.607236 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:55Z","lastTransitionTime":"2025-12-05T19:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.710076 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.710185 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.710206 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.710272 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.710291 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:55Z","lastTransitionTime":"2025-12-05T19:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.813480 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.813536 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.813553 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.813575 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.813592 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:55Z","lastTransitionTime":"2025-12-05T19:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.916459 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.916531 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.916552 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.916580 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:55 crc kubenswrapper[4828]: I1205 19:04:55.916614 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:55Z","lastTransitionTime":"2025-12-05T19:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.019912 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.019989 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.020009 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.020034 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.020053 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:56Z","lastTransitionTime":"2025-12-05T19:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.122538 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.122594 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.122613 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.122642 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.122665 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:56Z","lastTransitionTime":"2025-12-05T19:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.225968 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.226045 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.226057 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.226077 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.226091 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:56Z","lastTransitionTime":"2025-12-05T19:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.329007 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.329046 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.329056 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.329070 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.329079 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:56Z","lastTransitionTime":"2025-12-05T19:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.432624 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.432690 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.432707 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.432732 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.432751 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:56Z","lastTransitionTime":"2025-12-05T19:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.446327 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:04:56 crc kubenswrapper[4828]: E1205 19:04:56.446471 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.446686 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:04:56 crc kubenswrapper[4828]: E1205 19:04:56.446747 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.446918 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:04:56 crc kubenswrapper[4828]: E1205 19:04:56.446997 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.447030 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:04:56 crc kubenswrapper[4828]: E1205 19:04:56.447103 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.535705 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.535747 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.535770 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.535796 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.535811 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:56Z","lastTransitionTime":"2025-12-05T19:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.638768 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.638869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.638904 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.638935 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.638956 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:56Z","lastTransitionTime":"2025-12-05T19:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.742857 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.742929 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.742951 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.742981 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.743002 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:56Z","lastTransitionTime":"2025-12-05T19:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.846063 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.846149 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.846167 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.846191 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.846208 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:56Z","lastTransitionTime":"2025-12-05T19:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.948439 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.948507 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.948526 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.948549 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:56 crc kubenswrapper[4828]: I1205 19:04:56.948564 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:56Z","lastTransitionTime":"2025-12-05T19:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.050777 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.050812 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.050838 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.050854 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.050866 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:57Z","lastTransitionTime":"2025-12-05T19:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.154392 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.154468 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.154487 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.154511 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.154528 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:57Z","lastTransitionTime":"2025-12-05T19:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.256929 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.256990 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.257013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.257041 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.257062 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:57Z","lastTransitionTime":"2025-12-05T19:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.360385 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.360449 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.360467 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.360491 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.360507 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:57Z","lastTransitionTime":"2025-12-05T19:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.463678 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.463751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.463773 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.463862 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.463895 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:57Z","lastTransitionTime":"2025-12-05T19:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.566657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.566713 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.566729 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.566749 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.566763 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:57Z","lastTransitionTime":"2025-12-05T19:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.670446 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.670510 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.670550 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.670586 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.670611 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:57Z","lastTransitionTime":"2025-12-05T19:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.774067 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.774135 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.774153 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.774177 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.774194 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:57Z","lastTransitionTime":"2025-12-05T19:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.877760 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.877802 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.877813 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.877854 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.877872 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:57Z","lastTransitionTime":"2025-12-05T19:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.980721 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.980773 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.980794 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.980859 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:57 crc kubenswrapper[4828]: I1205 19:04:57.980882 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:57Z","lastTransitionTime":"2025-12-05T19:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.084011 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.084097 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.084120 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.084149 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.084172 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:58Z","lastTransitionTime":"2025-12-05T19:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.187259 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.187325 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.187342 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.187366 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.187383 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:58Z","lastTransitionTime":"2025-12-05T19:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.187383 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:58Z","lastTransitionTime":"2025-12-05T19:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.290257 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.290325 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.290345 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.290371 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.290390 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:58Z","lastTransitionTime":"2025-12-05T19:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.393493 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.393582 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.393599 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.393623 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.393643 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:58Z","lastTransitionTime":"2025-12-05T19:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.446357 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.446414 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.446388 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:04:58 crc kubenswrapper[4828]: E1205 19:04:58.446577 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 19:04:58 crc kubenswrapper[4828]: E1205 19:04:58.446674 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
Dec 05 19:04:58 crc kubenswrapper[4828]: E1205 19:04:58.446784 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.447051 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:04:58 crc kubenswrapper[4828]: E1205 19:04:58.447163 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.496886 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.496948 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.496972 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.497001 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.497019 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:58Z","lastTransitionTime":"2025-12-05T19:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.599898 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.600001 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.600039 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.600076 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.600099 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:58Z","lastTransitionTime":"2025-12-05T19:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.702964 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.703014 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.703025 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.703042 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.703052 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:58Z","lastTransitionTime":"2025-12-05T19:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.805778 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.805856 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.805874 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.805896 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.805909 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:58Z","lastTransitionTime":"2025-12-05T19:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.909465 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.909520 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.909533 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.909554 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:58 crc kubenswrapper[4828]: I1205 19:04:58.909567 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:58Z","lastTransitionTime":"2025-12-05T19:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.012741 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.012904 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.012934 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.012964 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.012984 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:59Z","lastTransitionTime":"2025-12-05T19:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.115943 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.116050 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.116077 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.116108 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.116132 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:59Z","lastTransitionTime":"2025-12-05T19:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.219592 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.219662 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.219679 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.219705 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.219726 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:59Z","lastTransitionTime":"2025-12-05T19:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.322867 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.322916 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.322930 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.322959 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.322977 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:59Z","lastTransitionTime":"2025-12-05T19:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.426777 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.426816 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.426839 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.426860 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
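Every kubenswrapper record in this section follows klog's header layout: a severity letter (I/W/E/F), the date as MMDD, the wall time, the emitting PID, and the source file:line, followed by a structured message. The sketch below splits a record from just above into those fields; the regular expression is an assumption fitted to the lines in this log, not an official klog parser.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// klog header: <severity>MMDD hh:mm:ss.micros <pid> <file>:<line>] <message>
	// Regex is fitted to the records above; assumption, not an official parser.
	re := regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) (\d+) ([\w.]+:\d+)\] (.*)`)

	// Record copied from the log above.
	line := `I1205 19:04:59.426860 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"`

	m := re.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```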
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.426878 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:59Z","lastTransitionTime":"2025-12-05T19:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.446920 4828 scope.go:117] "RemoveContainer" containerID="734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.528915 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.528999 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.529017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.529038 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.529053 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:59Z","lastTransitionTime":"2025-12-05T19:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.631956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.631992 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.632002 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.632017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
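The multus status record further below (the kube-multus termination message at 19:04:50) shows the plugin exiting after waiting 45 seconds for the readiness-indicator file `/host/run/multus/cni/net.d/10-ovn-kubernetes.conf`, i.e. the same missing-network condition driving the loop here. A stdlib sketch of that kind of wait follows; the path is taken from that record, while the poll interval and timeout values are assumptions standing in for the wait.PollImmediate loop the error mentions.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the timeout elapses; a stdlib
// stand-in for the pollimmediate wait the multus error below refers to.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // readiness-indicator file appeared
		} else if !errors.Is(err, os.ErrNotExist) {
			return err // unexpected filesystem error
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path from the multus record below; 1s/45s are assumed values.
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	fmt.Println("result:", err)
}
```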
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.632028 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:59Z","lastTransitionTime":"2025-12-05T19:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.734732 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.734779 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.734792 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.734810 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.734851 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:59Z","lastTransitionTime":"2025-12-05T19:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.837703 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.837739 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.837751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.837764 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
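From 19:04:59 onward, the records below show every pod status patch failing because the webhook `pod.network-node-identity.openshift.io` at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z. A minimal crypto/x509 sketch of the validity comparison behind that "certificate has expired" error follows; the certificate file path is a placeholder, while the NotAfter check itself is standard library behavior.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; in this log the failing certificate is the one
	// served by the webhook endpoint at 127.0.0.1:9743.
	data, err := os.ReadFile("webhook-serving.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	now := time.Now()
	if now.After(cert.NotAfter) {
		// Same comparison the TLS handshake reports in the records below:
		// "current time ... is after <NotAfter>".
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	} else {
		fmt.Println("certificate valid until", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```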
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.837774 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:59Z","lastTransitionTime":"2025-12-05T19:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.924546 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/2.log"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.928457 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd"}
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.929120 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.940276 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.940321 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.940350 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.940369 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.940383 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:04:59Z","lastTransitionTime":"2025-12-05T19:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.954106 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:59Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.969762 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:59Z is after 2025-08-24T17:21:41Z" Dec 05 19:04:59 crc kubenswrapper[4828]: I1205 19:04:59.987963 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:04:59Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.005219 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.019198 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:50Z\\\",\\\"message\\\":\\\"2025-12-05T19:04:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df\\\\n2025-12-05T19:04:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df to /host/opt/cni/bin/\\\\n2025-12-05T19:04:05Z [verbose] multus-daemon started\\\\n2025-12-05T19:04:05Z [verbose] Readiness Indicator file check\\\\n2025-12-05T19:04:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.032076 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.042953 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.043001 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.043012 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.043027 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.043039 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:00Z","lastTransitionTime":"2025-12-05T19:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.047062 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.058459 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.068738 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.083957 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.101735 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:31Z\\\",\\\"message\\\":\\\"uring zone local for Pod openshift-multus/multus-ksv4w in node crc\\\\nI1205 19:04:31.357736 6485 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt\\\\nI1205 19:04:31.357704 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1205 19:04:31.357745 6485 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1205 19:04:31.357695 6485 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1205 19:04:31.357755 6485 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI1205 19:04:31.357759 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF1205 19:04:31.357761 6485 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.111927 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.124740 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.137211 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.145731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.145777 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.145790 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.145809 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.145838 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:00Z","lastTransitionTime":"2025-12-05T19:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.149418 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.160362 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.170327 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.178601 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.248397 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.248445 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.248456 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.248474 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.248486 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:00Z","lastTransitionTime":"2025-12-05T19:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.351581 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.351653 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.351671 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.351696 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.351715 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:00Z","lastTransitionTime":"2025-12-05T19:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.446048 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.446116 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.446173 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:00 crc kubenswrapper[4828]: E1205 19:05:00.446377 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.446399 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:00 crc kubenswrapper[4828]: E1205 19:05:00.446558 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:00 crc kubenswrapper[4828]: E1205 19:05:00.446806 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:00 crc kubenswrapper[4828]: E1205 19:05:00.446989 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.459564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.459738 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.459803 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.459878 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.459908 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:00Z","lastTransitionTime":"2025-12-05T19:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.563548 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.563676 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.563700 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.563731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.563783 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:00Z","lastTransitionTime":"2025-12-05T19:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.667122 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.667191 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.667214 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.667247 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.667271 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:00Z","lastTransitionTime":"2025-12-05T19:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.770457 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.770510 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.770534 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.770565 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.770588 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:00Z","lastTransitionTime":"2025-12-05T19:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.874112 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.874185 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.874203 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.874227 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.874246 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:00Z","lastTransitionTime":"2025-12-05T19:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.934169 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/3.log" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.935207 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/2.log" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.939285 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd" exitCode=1 Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.939329 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd"} Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.939376 4828 scope.go:117] "RemoveContainer" containerID="734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.940636 4828 scope.go:117] "RemoveContainer" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd" Dec 05 19:05:00 crc kubenswrapper[4828]: E1205 19:05:00.941025 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.965379 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://734951387a480f4fea65d79f25b5f83da94782dba3a97f436f059f3f43255298\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:31Z\\\",\\\"message\\\":\\\"uring zone local for Pod openshift-multus/multus-ksv4w in node crc\\\\nI1205 19:04:31.357736 6485 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt\\\\nI1205 19:04:31.357704 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1205 19:04:31.357745 6485 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1205 19:04:31.357695 6485 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1205 19:04:31.357755 6485 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI1205 19:04:31.357759 6485 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF1205 19:04:31.357761 6485 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:05:00Z\\\",\\\"message\\\":\\\"/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1205 19:05:00.224392 6851 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 19:05:00.224895 6851 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.978467 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.979149 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.979198 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.979207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.979224 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.979232 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:00Z","lastTransitionTime":"2025-12-05T19:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:00 crc kubenswrapper[4828]: I1205 19:05:00.993405 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:00Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.006697 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.019219 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.031379 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.045747 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.059281 4828 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.078096 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2b
a0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.086093 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.086181 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.086209 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.086243 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.086276 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:01Z","lastTransitionTime":"2025-12-05T19:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.095400 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.108817 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.122287 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.135377 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:50Z\\\",\\\"message\\\":\\\"2025-12-05T19:04:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df\\\\n2025-12-05T19:04:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df to /host/opt/cni/bin/\\\\n2025-12-05T19:04:05Z [verbose] multus-daemon started\\\\n2025-12-05T19:04:05Z [verbose] Readiness Indicator file check\\\\n2025-12-05T19:04:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.146929 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.162980 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.177355 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.189342 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.189379 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.189390 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.189405 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.189416 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:01Z","lastTransitionTime":"2025-12-05T19:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.192639 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.207380 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.292266 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.292351 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:01 crc 
kubenswrapper[4828]: I1205 19:05:01.292388 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.292413 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.292431 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:01Z","lastTransitionTime":"2025-12-05T19:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.394954 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.395207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.395217 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.395230 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.395239 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:01Z","lastTransitionTime":"2025-12-05T19:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.497629 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.497680 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.497693 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.497712 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.497722 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:01Z","lastTransitionTime":"2025-12-05T19:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.601197 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.601257 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.601281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.601343 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.601368 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:01Z","lastTransitionTime":"2025-12-05T19:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.704458 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.704533 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.704555 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.704583 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.704608 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:01Z","lastTransitionTime":"2025-12-05T19:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.807107 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.807182 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.807202 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.807230 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.807248 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:01Z","lastTransitionTime":"2025-12-05T19:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.910128 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.910190 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.910210 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.910240 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.910261 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:01Z","lastTransitionTime":"2025-12-05T19:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.945331 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/3.log"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.949635 4828 scope.go:117] "RemoveContainer" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd"
Dec 05 19:05:01 crc kubenswrapper[4828]: E1205 19:05:01.949864 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17"
Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.965700 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.980268 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:01 crc kubenswrapper[4828]: I1205 19:05:01.995263 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:01Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.012078 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.013109 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:02 
crc kubenswrapper[4828]: I1205 19:05:02.013153 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.013172 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.013194 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.013208 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:02Z","lastTransitionTime":"2025-12-05T19:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.028099 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-co
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"m
ountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.050172 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6049d3385b5ee06007c8d6c858280974c2ff941
a9d87574341be11e22f15bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:05:00Z\\\",\\\"message\\\":\\\"/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1205 19:05:00.224392 6851 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 19:05:00.224895 6851 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.063507 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.075810 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.088152 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.098445 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.110266 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.117923 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.117975 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.117991 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.118013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.118030 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:02Z","lastTransitionTime":"2025-12-05T19:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.121370 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.135249 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",
\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.170032 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.190771 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.207316 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.219916 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.220173 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.220334 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.220525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.220709 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:02Z","lastTransitionTime":"2025-12-05T19:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.221876 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.236699 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:50Z\\\",\\\"message\\\":\\\"2025-12-05T19:04:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df\\\\n2025-12-05T19:04:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df to /host/opt/cni/bin/\\\\n2025-12-05T19:04:05Z [verbose] multus-daemon started\\\\n2025-12-05T19:04:05Z [verbose] Readiness Indicator file check\\\\n2025-12-05T19:04:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.323532 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.324043 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.324162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.324280 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.324394 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:02Z","lastTransitionTime":"2025-12-05T19:05:02Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.427102 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.427178 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.427202 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.427230 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.427251 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:02Z","lastTransitionTime":"2025-12-05T19:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.445462 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.445617 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.445475 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.445680 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:02 crc kubenswrapper[4828]: E1205 19:05:02.445673 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:02 crc kubenswrapper[4828]: E1205 19:05:02.445876 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:02 crc kubenswrapper[4828]: E1205 19:05:02.445970 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:02 crc kubenswrapper[4828]: E1205 19:05:02.446074 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.467135 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0
bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.489711 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.505392 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.522198 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.529425 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.529448 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.529459 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.529473 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.529486 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:02Z","lastTransitionTime":"2025-12-05T19:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.540695 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.555595 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.584641 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.604040 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.622672 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.632391 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.632436 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.632454 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.632476 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.632491 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:02Z","lastTransitionTime":"2025-12-05T19:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.641436 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.670709 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:50Z\\\",\\\"message\\\":\\\"2025-12-05T19:04:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df\\\\n2025-12-05T19:04:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df to /host/opt/cni/bin/\\\\n2025-12-05T19:04:05Z [verbose] multus-daemon started\\\\n2025-12-05T19:04:05Z [verbose] Readiness Indicator file check\\\\n2025-12-05T19:04:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.691335 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.707922 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.723955 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.734213 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.734364 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:02 
crc kubenswrapper[4828]: I1205 19:05:02.734393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.734402 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.734417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.734431 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:02Z","lastTransitionTime":"2025-12-05T19:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.748293 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-co
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"m
ountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.772603 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6049d3385b5ee06007c8d6c858280974c2ff941
a9d87574341be11e22f15bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:05:00Z\\\",\\\"message\\\":\\\"/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1205 19:05:00.224392 6851 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 19:05:00.224895 6851 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.785770 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:02Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.836131 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.836168 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.836179 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.836193 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.836204 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:02Z","lastTransitionTime":"2025-12-05T19:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.938476 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.938514 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.938525 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.938541 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:02 crc kubenswrapper[4828]: I1205 19:05:02.938551 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:02Z","lastTransitionTime":"2025-12-05T19:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.041617 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.042302 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.042356 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.042387 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.042408 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:03Z","lastTransitionTime":"2025-12-05T19:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.145096 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.145254 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.145277 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.145305 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.145326 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:03Z","lastTransitionTime":"2025-12-05T19:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.247948 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.248013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.248035 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.248066 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.248089 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:03Z","lastTransitionTime":"2025-12-05T19:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.351297 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.351385 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.351414 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.351450 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.351476 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:03Z","lastTransitionTime":"2025-12-05T19:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.454374 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.454443 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.454464 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.454486 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.454502 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:03Z","lastTransitionTime":"2025-12-05T19:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.557916 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.557981 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.558000 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.558026 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.558045 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:03Z","lastTransitionTime":"2025-12-05T19:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.660176 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.660214 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.660222 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.660236 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:03 crc kubenswrapper[4828]: I1205 19:05:03.660244 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:03Z","lastTransitionTime":"2025-12-05T19:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.147674 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.148730 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.148858 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.148955 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.149047 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:04Z","lastTransitionTime":"2025-12-05T19:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.252258 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.252308 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.252329 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.252357 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.252376 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:04Z","lastTransitionTime":"2025-12-05T19:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.303448 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.303629 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.303652 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.303664 4828 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.303711 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 19:06:08.30369455 +0000 UTC m=+146.198916856 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.355111 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.355137 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.355145 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.355157 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.355165 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:04Z","lastTransitionTime":"2025-12-05T19:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.404280 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.404451 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.404509 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.404547 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.404691 4828 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 
19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.404757 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:06:08.404737013 +0000 UTC m=+146.299959359 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.405058 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:08.405041731 +0000 UTC m=+146.300264067 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.405191 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.405223 4828 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.405243 4828 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.405287 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 19:06:08.405272787 +0000 UTC m=+146.300495123 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.405194 4828 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.405399 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 19:06:08.405377259 +0000 UTC m=+146.300599615 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.445868 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.446041 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.446253 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.446318 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.446440 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.446461 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.446523 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:04 crc kubenswrapper[4828]: E1205 19:05:04.446595 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.456872 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.456938 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.456982 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.457004 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.457022 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:04Z","lastTransitionTime":"2025-12-05T19:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.560328 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.560412 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.560442 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.560470 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.560493 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:04Z","lastTransitionTime":"2025-12-05T19:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.663205 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.663265 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.663284 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.663306 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.663323 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:04Z","lastTransitionTime":"2025-12-05T19:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.765366 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.765399 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.765410 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.765428 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.765444 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:04Z","lastTransitionTime":"2025-12-05T19:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.867657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.867694 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.867708 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.867726 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.867740 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:04Z","lastTransitionTime":"2025-12-05T19:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.971219 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.971272 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.971290 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.971313 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:04 crc kubenswrapper[4828]: I1205 19:05:04.971331 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:04Z","lastTransitionTime":"2025-12-05T19:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.062293 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.062353 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.062375 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.062588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.062642 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: E1205 19:05:05.084906 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.090652 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.090714 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.090731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.090752 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.090767 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: E1205 19:05:05.109817 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.114386 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.114488 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.114514 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.114544 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.114567 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: E1205 19:05:05.128701 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.132354 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.132399 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.132417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.132440 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.132456 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: E1205 19:05:05.145035 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.148194 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.148224 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.148234 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.148248 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.148258 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: E1205 19:05:05.159575 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:05Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:05 crc kubenswrapper[4828]: E1205 19:05:05.159725 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.161271 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.161308 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.161319 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.161339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.161352 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.264283 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.264346 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.264361 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.264377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.264388 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.366972 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.367068 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.367078 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.367094 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.367106 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.463040 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.469545 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.469603 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.469627 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.469654 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.469678 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.578140 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.578245 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.578264 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.578288 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.578307 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.681741 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.682135 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.682146 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.682162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.682173 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.785652 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.785762 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.785790 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.785815 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.785868 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.888083 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.888127 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.888143 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.888164 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.888180 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.991124 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.991188 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.991202 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.991220 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:05 crc kubenswrapper[4828]: I1205 19:05:05.991232 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:05Z","lastTransitionTime":"2025-12-05T19:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.094189 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.094254 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.094276 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.094345 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.094367 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:06Z","lastTransitionTime":"2025-12-05T19:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.197623 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.197700 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.197723 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.197751 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.197774 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:06Z","lastTransitionTime":"2025-12-05T19:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.301125 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.301185 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.301204 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.301232 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.301255 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:06Z","lastTransitionTime":"2025-12-05T19:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.404655 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.404773 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.404797 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.404853 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.404880 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:06Z","lastTransitionTime":"2025-12-05T19:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.446454 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.446498 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.446574 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
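The "No sandbox" messages above and the sync errors that follow are the same condition seen from two angles: sandbox creation is refused while the runtime reports NetworkReady=false, and it stays false until a CNI configuration file appears. A minimal sketch of the equivalent check, assuming Python 3 on the host; the directory is the one named in the kubelet message, while the file patterns are the conventional CNI ones (an assumption, not something the log states):

    import glob
    import os

    CNI_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet message

    # The runtime flips NetworkReady to true once a network config shows up here;
    # *.conf, *.conflist and *.json are the usual CNI config extensions (assumed).
    configs = sorted(
        path
        for pattern in ("*.conf", "*.conflist", "*.json")
        for path in glob.glob(os.path.join(CNI_DIR, pattern))
    )
    if configs:
        print("CNI config present:", configs)
    else:
        print("no CNI configuration file in", CNI_DIR, "- network provider not started yet")

The config is normally written by the cluster's network plugin once its own daemon starts, which is why these errors simply repeat until the network provider's pods come up.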
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:06 crc kubenswrapper[4828]: E1205 19:05:06.446928 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:06 crc kubenswrapper[4828]: E1205 19:05:06.447038 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.447158 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:06 crc kubenswrapper[4828]: E1205 19:05:06.447262 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.508605 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.508688 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.508711 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.508740 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.508765 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:06Z","lastTransitionTime":"2025-12-05T19:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.612367 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.612456 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.612482 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.612513 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.612536 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:06Z","lastTransitionTime":"2025-12-05T19:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.716488 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.716618 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.716711 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.716813 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.716882 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:06Z","lastTransitionTime":"2025-12-05T19:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.820111 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.820197 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.820210 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.820235 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.820251 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:06Z","lastTransitionTime":"2025-12-05T19:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.923577 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.923646 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.923667 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.923692 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:06 crc kubenswrapper[4828]: I1205 19:05:06.923711 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:06Z","lastTransitionTime":"2025-12-05T19:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.026397 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.026451 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.026495 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.026514 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.026525 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:07Z","lastTransitionTime":"2025-12-05T19:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.129933 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.129990 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.130007 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.130028 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.130047 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:07Z","lastTransitionTime":"2025-12-05T19:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.233359 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.233407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.233451 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.233468 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.233916 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:07Z","lastTransitionTime":"2025-12-05T19:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.337110 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.337163 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.337181 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.337205 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.337222 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:07Z","lastTransitionTime":"2025-12-05T19:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.440704 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.440778 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.440796 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.440850 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.440870 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:07Z","lastTransitionTime":"2025-12-05T19:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.543658 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.544059 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.544206 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.544350 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.544491 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:07Z","lastTransitionTime":"2025-12-05T19:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.647170 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.647233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.647255 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.647283 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.647301 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:07Z","lastTransitionTime":"2025-12-05T19:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.749901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.749972 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.749990 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.750014 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.750033 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:07Z","lastTransitionTime":"2025-12-05T19:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.852630 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.852897 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.853015 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.853088 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.853147 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:07Z","lastTransitionTime":"2025-12-05T19:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.956588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.956912 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.957162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.957264 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:07 crc kubenswrapper[4828]: I1205 19:05:07.957344 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:07Z","lastTransitionTime":"2025-12-05T19:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.059704 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.059739 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.059769 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.059785 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.059793 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:08Z","lastTransitionTime":"2025-12-05T19:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.162165 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.162219 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.162236 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.162259 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.162273 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:08Z","lastTransitionTime":"2025-12-05T19:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.264767 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.264846 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.264861 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.264880 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.264892 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:08Z","lastTransitionTime":"2025-12-05T19:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.367693 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.367773 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.367796 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.367849 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.367867 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:08Z","lastTransitionTime":"2025-12-05T19:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.446478 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.446995 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:08 crc kubenswrapper[4828]: E1205 19:05:08.447174 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.446612 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.446636 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:08 crc kubenswrapper[4828]: E1205 19:05:08.454790 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:08 crc kubenswrapper[4828]: E1205 19:05:08.455235 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:08 crc kubenswrapper[4828]: E1205 19:05:08.455412 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.469580 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.469642 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.469656 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.469672 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.469683 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:08Z","lastTransitionTime":"2025-12-05T19:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.572752 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.572800 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.572813 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.572856 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.572871 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:08Z","lastTransitionTime":"2025-12-05T19:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.675806 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.675887 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.675903 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.675926 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.675940 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:08Z","lastTransitionTime":"2025-12-05T19:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.779206 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.779274 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.779297 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.779324 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.779428 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:08Z","lastTransitionTime":"2025-12-05T19:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.882402 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.882473 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.882497 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.882529 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.882549 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:08Z","lastTransitionTime":"2025-12-05T19:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.985665 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.985727 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.985745 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.985770 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:08 crc kubenswrapper[4828]: I1205 19:05:08.985790 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:08Z","lastTransitionTime":"2025-12-05T19:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.089137 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.089202 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.089224 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.089252 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.089271 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:09Z","lastTransitionTime":"2025-12-05T19:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.192594 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.192946 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.193135 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.193269 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.193394 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:09Z","lastTransitionTime":"2025-12-05T19:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.296775 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.297682 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.297927 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.298145 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.298340 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:09Z","lastTransitionTime":"2025-12-05T19:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.401084 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.401132 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.401148 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.401171 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.401185 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:09Z","lastTransitionTime":"2025-12-05T19:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.503769 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.504315 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.504562 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.504734 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.504931 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:09Z","lastTransitionTime":"2025-12-05T19:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.607274 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.607314 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.607325 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.607343 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:09 crc kubenswrapper[4828]: I1205 19:05:09.607356 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:09Z","lastTransitionTime":"2025-12-05T19:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-record cycle (four "Recording event message for node" entries and one "Node became not ready" update, differing only in timestamps) repeats at ~100 ms intervals from 19:05:09.710 through 19:05:10.431; the duplicate cycles are omitted here ...]
Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.445956 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.446036 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.446045 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:10 crc kubenswrapper[4828]: E1205 19:05:10.446138 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:10 crc kubenswrapper[4828]: E1205 19:05:10.446271 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:10 crc kubenswrapper[4828]: E1205 19:05:10.446371 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.446471 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:10 crc kubenswrapper[4828]: E1205 19:05:10.446668 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
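The records above show the first direct impact of the missing CNI configuration: the kubelet cannot create sandboxes for four pods, and every sync attempt is skipped with the same NetworkPluginNotReady error. The following is a minimal triage sketch, not part of OpenShift or CRC tooling: it checks the directory named in the error for a CNI config file, and probes the node-identity webhook endpoint (127.0.0.1:9743, the address in the status-patch failures further down this excerpt) to print its serving-certificate validity window. The file name triage.go and both helper functions are hypothetical; only the path, the port, and the CNI config extensions (.conf, .conflist, .json) come from the log or from CNI convention.

    // triage.go - a minimal diagnostic sketch, assuming it is run as root on the node itself.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"os"
    	"path/filepath"
    	"time"
    )

    // checkCNIDir reports whether the directory the kubelet watches contains any
    // CNI network config; the runtime keeps NetworkReady=false until one appears.
    func checkCNIDir(dir string) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Printf("cannot read %s: %v\n", dir, err)
    		return
    	}
    	found := false
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json": // extensions the CNI config loader accepts
    			fmt.Printf("CNI config present: %s\n", filepath.Join(dir, e.Name()))
    			found = true
    		}
    	}
    	if !found {
    		fmt.Println("no CNI config yet: the network plugin has not finished starting")
    	}
    }

    // checkWebhookCert fetches the serving certificate of the webhook endpoint and
    // prints its validity window. InsecureSkipVerify is deliberate: an expired
    // certificate would otherwise abort the handshake before we can inspect it.
    func checkWebhookCert(addr string) {
    	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		fmt.Printf("cannot reach webhook %s: %v\n", addr, err)
    		return
    	}
    	defer conn.Close()
    	state := conn.ConnectionState()
    	if len(state.PeerCertificates) == 0 {
    		fmt.Println("no peer certificate presented")
    		return
    	}
    	cert := state.PeerCertificates[0]
    	fmt.Printf("webhook cert valid %s to %s (now %s)\n",
    		cert.NotBefore.Format(time.RFC3339),
    		cert.NotAfter.Format(time.RFC3339),
    		time.Now().UTC().Format(time.RFC3339))
    	if time.Now().After(cert.NotAfter) {
    		fmt.Println("certificate expired: API calls through this webhook will fail")
    	}
    }

    func main() {
    	checkCNIDir("/etc/kubernetes/cni/net.d") // directory named in the kubelet error
    	checkWebhookCert("127.0.0.1:9743")       // endpoint named in the status-patch failures
    }

On this node the certificate probe would be expected to report NotAfter=2025-08-24T17:21:41Z, matching the x509 "certificate has expired" errors that the status manager logs below.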
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.534881 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.535216 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.535308 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.535395 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.535488 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:10Z","lastTransitionTime":"2025-12-05T19:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.638014 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.638072 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.638090 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.638114 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.638130 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:10Z","lastTransitionTime":"2025-12-05T19:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.741227 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.741567 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.741636 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.741746 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.741845 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:10Z","lastTransitionTime":"2025-12-05T19:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.843612 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.843653 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.843664 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.843679 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.843690 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:10Z","lastTransitionTime":"2025-12-05T19:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.946450 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.946729 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.946809 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.946938 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:10 crc kubenswrapper[4828]: I1205 19:05:10.947021 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:10Z","lastTransitionTime":"2025-12-05T19:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.049362 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.049639 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.049737 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.049862 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.049956 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:11Z","lastTransitionTime":"2025-12-05T19:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.153257 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.153309 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.153326 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.153350 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.153366 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:11Z","lastTransitionTime":"2025-12-05T19:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.256182 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.256242 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.256279 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.256326 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.256357 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:11Z","lastTransitionTime":"2025-12-05T19:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.360464 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.360529 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.360548 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.360572 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.360589 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:11Z","lastTransitionTime":"2025-12-05T19:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.464516 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.464987 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.465180 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.465329 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.465463 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:11Z","lastTransitionTime":"2025-12-05T19:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.569281 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.569631 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.569882 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.570144 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.570369 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:11Z","lastTransitionTime":"2025-12-05T19:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.673728 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.674117 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.674271 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.674423 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.674624 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:11Z","lastTransitionTime":"2025-12-05T19:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.777647 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.777694 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.777710 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.777731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.777747 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:11Z","lastTransitionTime":"2025-12-05T19:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.880868 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.880992 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.881017 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.881046 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.881069 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:11Z","lastTransitionTime":"2025-12-05T19:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.984321 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.984374 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.984388 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.984407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:11 crc kubenswrapper[4828]: I1205 19:05:11.984422 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:11Z","lastTransitionTime":"2025-12-05T19:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.087964 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.088443 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.088606 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.088772 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.089034 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:12Z","lastTransitionTime":"2025-12-05T19:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.191491 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.191553 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.191568 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.191588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.191605 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:12Z","lastTransitionTime":"2025-12-05T19:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.294324 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.294393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.294419 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.294447 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.294468 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:12Z","lastTransitionTime":"2025-12-05T19:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.398054 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.399013 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.399169 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.399339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.399486 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:12Z","lastTransitionTime":"2025-12-05T19:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.446250 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.446595 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.446428 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.446365 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:12 crc kubenswrapper[4828]: E1205 19:05:12.447562 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:12 crc kubenswrapper[4828]: E1205 19:05:12.447873 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:12 crc kubenswrapper[4828]: E1205 19:05:12.448244 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:12 crc kubenswrapper[4828]: E1205 19:05:12.448326 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.467060 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.484969 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.501507 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.502809 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.502897 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 
19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.502913 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.502933 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.503362 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:12Z","lastTransitionTime":"2025-12-05T19:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.516362 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.536048 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.547308 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.557480 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5ce72f-0011-47fc-87b0-e5d24dd07faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc52dbbcfbf16aff6f2984f391cddea4e4e04ce4b402a312f47cf5c0840f119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dbae45d5825575b4ee70c74e486cf3da64b0da3d60feb158c8ad3b77749f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d7dbae45d5825575b4ee70c74e486cf3da64b0da3d60feb158c8ad3b77749f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.576166 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:05:00Z\\\",\\\"message\\\":\\\"/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1205 19:05:00.224392 6851 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 19:05:00.224895 6851 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.585263 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.595303 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.605581 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.605845 4828 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.605960 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.606067 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.606195 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:12Z","lastTransitionTime":"2025-12-05T19:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.607204 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.619490 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.629542 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2
597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.642567 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.653313 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.668438 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.679468 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:50Z\\\",\\\"message\\\":\\\"2025-12-05T19:04:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df\\\\n2025-12-05T19:04:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df to /host/opt/cni/bin/\\\\n2025-12-05T19:04:05Z [verbose] multus-daemon started\\\\n2025-12-05T19:04:05Z [verbose] Readiness Indicator file check\\\\n2025-12-05T19:04:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.696656 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c
492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.707749 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726
da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.708581 4828 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.708606 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.708615 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.708627 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.708636 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:12Z","lastTransitionTime":"2025-12-05T19:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.810861 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.811096 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.811162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.811227 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.811344 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:12Z","lastTransitionTime":"2025-12-05T19:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.913614 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.913645 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.913653 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.913667 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:12 crc kubenswrapper[4828]: I1205 19:05:12.913676 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:12Z","lastTransitionTime":"2025-12-05T19:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.016411 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.016493 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.016513 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.016539 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.016558 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:13Z","lastTransitionTime":"2025-12-05T19:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.119470 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.119530 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.119541 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.119558 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.119570 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:13Z","lastTransitionTime":"2025-12-05T19:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.222438 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.222499 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.222516 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.222536 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.222553 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:13Z","lastTransitionTime":"2025-12-05T19:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
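Has your network provider started?"}

The "Failed to update status for pod" errors earlier in this log share a single root cause: the serving certificate presented by the network-node-identity webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, so every kubelet status patch that must pass the pod.network-node-identity.openshift.io webhook fails TLS verification. The following is a minimal, self-contained Go sketch of the same NotBefore/NotAfter validity-window check that crypto/x509 performs during the handshake; the certificate path is hypothetical, not something taken from this log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; on a real node the webhook serving certificate
	// lives wherever network-node-identity mounts its serving secret.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The same validity-window test that fails above with
	// "current time 2025-12-05T19:05:12Z is after 2025-08-24T17:21:41Z".
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid: valid %s to %s, current time %s\n",
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate currently valid")
}
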
Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.325240 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.325305 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.325322 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.325347 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.325364 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:13Z","lastTransitionTime":"2025-12-05T19:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.428717 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.428762 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.428774 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.428791 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.428802 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:13Z","lastTransitionTime":"2025-12-05T19:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.531388 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.531445 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.531468 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.531498 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.531520 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:13Z","lastTransitionTime":"2025-12-05T19:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.635046 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.635124 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.635146 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.635174 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.635195 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:13Z","lastTransitionTime":"2025-12-05T19:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.738317 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.738390 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.738418 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.738450 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.738474 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:13Z","lastTransitionTime":"2025-12-05T19:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.841371 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.841409 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.841418 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.841432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.841441 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:13Z","lastTransitionTime":"2025-12-05T19:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
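Has your network provider started?"}

The "Node became not ready" condition that repeats throughout this log carries the actual readiness gate: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration file, and it stays false until the network plugin writes one there. Below is a minimal sketch of that directory check, assuming the conventional CNI config extensions (.conf, .conflist, .json); it is a simplified stand-in, not CRI-O's actual loader.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig (a hypothetical helper) reports whether dir contains at
// least one file with a CNI configuration extension. It mirrors the
// condition behind "no CNI configuration file in /etc/kubernetes/cni/net.d/".
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("NetworkReady:", ready)
}
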
Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.944451 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.944500 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.944511 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.944527 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:13 crc kubenswrapper[4828]: I1205 19:05:13.944541 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:13Z","lastTransitionTime":"2025-12-05T19:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.047337 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.047372 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.047381 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.047393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.047402 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:14Z","lastTransitionTime":"2025-12-05T19:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.150585 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.150622 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.150631 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.150649 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.150661 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:14Z","lastTransitionTime":"2025-12-05T19:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.254688 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.254763 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.254786 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.254814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.254896 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:14Z","lastTransitionTime":"2025-12-05T19:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.358401 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.358478 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.358498 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.358559 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.358582 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:14Z","lastTransitionTime":"2025-12-05T19:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.446486 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.446523 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.446584 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:14 crc kubenswrapper[4828]: E1205 19:05:14.446948 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:14 crc kubenswrapper[4828]: E1205 19:05:14.447238 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.447252 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:14 crc kubenswrapper[4828]: E1205 19:05:14.447584 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:14 crc kubenswrapper[4828]: E1205 19:05:14.447435 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.461620 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.461670 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.461688 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.461710 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.461729 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:14Z","lastTransitionTime":"2025-12-05T19:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.564932 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.564979 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.564991 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.565004 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.565014 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:14Z","lastTransitionTime":"2025-12-05T19:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.668625 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.668711 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.668732 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.668757 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.668776 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:14Z","lastTransitionTime":"2025-12-05T19:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.771793 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.771899 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.771917 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.771941 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.771959 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:14Z","lastTransitionTime":"2025-12-05T19:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.874922 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.875007 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.875031 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.875068 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.875091 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:14Z","lastTransitionTime":"2025-12-05T19:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.977577 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.977615 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.977623 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.977637 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:14 crc kubenswrapper[4828]: I1205 19:05:14.977672 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:14Z","lastTransitionTime":"2025-12-05T19:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.080519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.080590 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.080614 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.080641 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.080658 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
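Has your network provider started?"}

Each "Node became not ready" event above eventually has to be flushed to the API server as a node-status patch; the "Error updating node status, will retry" entry just below shows that patch being rejected by the same expired webhook certificate. A minimal sketch of the Ready condition payload the kubelet serializes, using a hand-rolled struct rather than the real k8s.io/api/core/v1.NodeCondition type so it stays dependency-free:

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// NodeCondition mirrors only the fields visible in the patch below;
// the actual kubelet uses k8s.io/api/core/v1.NodeCondition.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	ready := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, _ := json.MarshalIndent(ready, "", "  ")
	fmt.Println(string(out))
}
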
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.184676 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.184741 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.184762 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.184794 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.184818 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.251248 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.251314 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.251331 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.251355 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.251375 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:15 crc kubenswrapper[4828]: E1205 19:05:15.271222 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.278971 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.279070 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.279091 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.279120 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.279189 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:15 crc kubenswrapper[4828]: E1205 19:05:15.299768 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.305218 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.305282 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.305301 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.305329 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.305352 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:15 crc kubenswrapper[4828]: E1205 19:05:15.325774 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.330655 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.330732 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.330747 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.330765 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.330779 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:15 crc kubenswrapper[4828]: E1205 19:05:15.349015 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.352910 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.352966 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.352985 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.353010 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.353027 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:15 crc kubenswrapper[4828]: E1205 19:05:15.372023 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:15Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:15 crc kubenswrapper[4828]: E1205 19:05:15.372244 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.373811 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
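The five consecutive "Error updating node status, will retry" failures above, closed out by "update node status exceeds retry count", share a single root cause: the API server cannot deliver the kubelet's status PATCH because the validating webhook node.network-node-identity.openshift.io presents an expired serving certificate on 127.0.0.1:9743 (NotAfter 2025-08-24T17:21:41Z, node clock at 2025-12-05). A minimal Go sketch of the same validity check the failing TLS handshake performs; it assumes it is run on the node itself and that the webhook is still listening on the logged address:

```go
// Sketch: inspect the webhook serving certificate that the API server's
// TLS handshake is rejecting. Address taken from the log line above;
// InsecureSkipVerify is used because we only want to read the cert.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert := certs[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Same condition the log reports: current time is after NotAfter.
		fmt.Println("certificate has expired")
	}
}
```

The handshake fails before any HTTP request is sent, which is why the kubelet sees the failure only second-hand, wrapped in the API server's "Internal error occurred".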
event="NodeHasSufficientMemory" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.373909 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.373931 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.373959 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.373976 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.447490 4828 scope.go:117] "RemoveContainer" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd" Dec 05 19:05:15 crc kubenswrapper[4828]: E1205 19:05:15.447766 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.476577 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.476652 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.476675 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.476702 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.476723 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.579148 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.579262 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.579296 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.579325 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.579348 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.682578 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.682735 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.682816 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.682891 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.682913 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.786194 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.786257 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.786275 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.786297 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.786315 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
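The Ready=False condition keeps being re-set for the same reason every time: the container runtime finds no network config in /etc/kubernetes/cni/net.d/. A small Go sketch of that probe, checking for the file extensions CNI-style runtimes commonly accept (the path is from the log; the extension list is an assumption):

```go
// Minimal sketch of the readiness check behind "no CNI configuration file":
// look for a network config in the CNI conf dir and report
// NetworkReady=false while none exists. Run on the node.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("read dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed accepted extensions
			fmt.Println("found network config:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
	}
}
```

Until the crash-looping ovnkube-controller comes up and writes a config into that directory, this check keeps failing and the node stays NotReady.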
Has your network provider started?"} Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.888939 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.888993 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.889004 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.889020 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.889030 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.992329 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.992417 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.992435 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.992458 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:15 crc kubenswrapper[4828]: I1205 19:05:15.992476 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:15Z","lastTransitionTime":"2025-12-05T19:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.094877 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.094934 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.094948 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.094966 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.094978 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:16Z","lastTransitionTime":"2025-12-05T19:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.197177 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.197235 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.197254 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.197272 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.197283 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:16Z","lastTransitionTime":"2025-12-05T19:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.300241 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.300295 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.300306 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.300339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.300361 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:16Z","lastTransitionTime":"2025-12-05T19:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.403289 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.403336 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.403345 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.403362 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.403374 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:16Z","lastTransitionTime":"2025-12-05T19:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
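[editor's note: every repetition carries the same root cause: the kubelet finds no CNI configuration file in /etc/kubernetes/cni/net.d/ and so reports NetworkReady=false. Below is a standalone Go sketch of an equivalent directory check, not kubelet's implementation; the accepted extensions (.conf, .conflist, .json) follow common CNI convention and are an assumption this log does not confirm.]

// cnicheck.go: report a NetworkReady-style verdict based on whether any CNI
// config file exists in the directory named in the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken verbatim from the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("NetworkReady=false: cannot read %s: %v\n", dir, err)
		return
	}
	var configs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) { // assumed CNI config extensions
		case ".conf", ".conflist", ".json":
			configs = append(configs, e.Name())
		}
	}
	if len(configs) == 0 {
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s/\n", dir)
		return
	}
	fmt.Printf("NetworkReady=true: found %v\n", configs)
}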
Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.446400 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.446455 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.446503 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.446546 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:05:16 crc kubenswrapper[4828]: E1205 19:05:16.446711 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 19:05:16 crc kubenswrapper[4828]: E1205 19:05:16.446921 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
Dec 05 19:05:16 crc kubenswrapper[4828]: E1205 19:05:16.447432 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:05:16 crc kubenswrapper[4828]: E1205 19:05:16.447494 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.506920 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.506987 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.506999 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.507018 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.507030 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:16Z","lastTransitionTime":"2025-12-05T19:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.610540 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.610600 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.610649 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.610678 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.610698 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:16Z","lastTransitionTime":"2025-12-05T19:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.714429 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.714492 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.714510 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.714536 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.714558 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:16Z","lastTransitionTime":"2025-12-05T19:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.817419 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.817467 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.817527 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.817545 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.817555 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:16Z","lastTransitionTime":"2025-12-05T19:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.920532 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.920607 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.920627 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.920652 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:16 crc kubenswrapper[4828]: I1205 19:05:16.920670 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:16Z","lastTransitionTime":"2025-12-05T19:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.023982 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.024042 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.024061 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.024084 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.024101 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:17Z","lastTransitionTime":"2025-12-05T19:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.126899 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.126964 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.126982 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.127009 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.127027 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:17Z","lastTransitionTime":"2025-12-05T19:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.229899 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.229956 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.229968 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.229988 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.230002 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:17Z","lastTransitionTime":"2025-12-05T19:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.332477 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.332554 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.332568 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.332588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.332603 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:17Z","lastTransitionTime":"2025-12-05T19:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.435517 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.435709 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.435740 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.435771 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.435794 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:17Z","lastTransitionTime":"2025-12-05T19:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.538719 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.538884 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.538912 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.538942 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.538965 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:17Z","lastTransitionTime":"2025-12-05T19:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.642424 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.642459 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.642466 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.642480 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.642488 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:17Z","lastTransitionTime":"2025-12-05T19:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.745731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.745779 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.745789 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.745802 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.745811 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:17Z","lastTransitionTime":"2025-12-05T19:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.847950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.848004 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.848019 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.848040 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.848055 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:17Z","lastTransitionTime":"2025-12-05T19:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.950928 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.951019 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.951042 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.951072 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:17 crc kubenswrapper[4828]: I1205 19:05:17.951092 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:17Z","lastTransitionTime":"2025-12-05T19:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.054583 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.054632 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.054644 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.054660 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.054673 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:18Z","lastTransitionTime":"2025-12-05T19:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.158140 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.158200 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.158216 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.158240 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.158260 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:18Z","lastTransitionTime":"2025-12-05T19:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.446509 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.446552 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.446564 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:05:18 crc kubenswrapper[4828]: I1205 19:05:18.446624 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:05:18 crc kubenswrapper[4828]: E1205 19:05:18.446660 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
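[editor's note: with four pods cycling through "Error syncing pod, skipping" every ~2 s, a count per pod is the quickest triage view. A hypothetical helper follows; it is not part of any tool mentioned in this log. It scans a journal excerpt on stdin and tallies error entries per pod.]

// podspam.go: count "Error syncing pod" entries per pod in a journal excerpt.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the pod="..." field that trails each pod_workers error entry.
	podRe := regexp.MustCompile(`Error syncing pod.*?pod="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range podRe.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%5d %s\n", n, pod)
	}
}

[usage: e.g. journalctl -u kubelet --no-pager | go run podspam.go; the unit name is assumed from the "Starting Kubernetes Kubelet" line that opens this journal.]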
Dec 05 19:05:18 crc kubenswrapper[4828]: E1205 19:05:18.446818 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:05:18 crc kubenswrapper[4828]: E1205 19:05:18.446887 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 19:05:18 crc kubenswrapper[4828]: E1205 19:05:18.446940 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[editor's note: the same node-status block repeats every ~100 ms from 19:05:18.466 through 19:05:20.422; twenty repetitions, differing only in timestamps, omitted here.]
Dec 05 19:05:20 crc kubenswrapper[4828]: I1205 19:05:20.445594 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:05:20 crc kubenswrapper[4828]: E1205 19:05:20.446177 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
Dec 05 19:05:20 crc kubenswrapper[4828]: I1205 19:05:20.446258 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:05:20 crc kubenswrapper[4828]: I1205 19:05:20.446296 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:05:20 crc kubenswrapper[4828]: E1205 19:05:20.446869 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:05:20 crc kubenswrapper[4828]: E1205 19:05:20.446957 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:05:20 crc kubenswrapper[4828]: I1205 19:05:20.447388 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:05:20 crc kubenswrapper[4828]: E1205 19:05:20.447589 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
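Every "Error syncing pod" above traces back to the same readiness check: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration. A hedged sketch of that check as a standalone tool (this is not the actual ocicni/CRI-O code; the directory is the one named in the error, and the accepted extensions are an assumption based on common CNI config-loader behaviour):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet error
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		// Extensions commonly accepted by CNI config loaders (assumption).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("candidate CNI config:", e.Name())
			found++
		}
	}
	if found == 0 {
		// Mirrors the NetworkReady=false condition in the log: the network
		// provider has not written its config yet, so no pod sandboxes
		// can be created.
		fmt.Println("no CNI configuration file found; network plugin not ready")
	}
}

On this node that file would normally be written by the OVN-Kubernetes node pod, which the later entries show is itself crash-looping, so the check keeps failing.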
[the node-status cycle repeats at 19:05:20.525, 19:05:20.629, 19:05:20.731 and 19:05:20.834]
Dec 05 19:05:20 crc kubenswrapper[4828]: I1205 19:05:20.898096 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:05:20 crc kubenswrapper[4828]: E1205 19:05:20.898242 4828 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 05 19:05:20 crc kubenswrapper[4828]: E1205 19:05:20.898293 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs podName:0595333b-a181-4a2b-90b8-e2accf80e78e nodeName:}" failed. No retries permitted until 2025-12-05 19:06:24.898278575 +0000 UTC m=+162.793500881 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs") pod "network-metrics-daemon-bvf6n" (UID: "0595333b-a181-4a2b-90b8-e2accf80e78e") : object "openshift-multus"/"metrics-daemon-secret" not registered
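The nestedpendingoperations.go entry defers the next mount attempt by durationBeforeRetry=1m4s (64 s, i.e. until 19:06:24). That figure is consistent with a doubling backoff from a sub-second initial delay; the sketch below illustrates only that shape, and its constants (500 ms initial delay, ~2 m cap) are illustrative assumptions, not values read from this log:

package main

import (
	"fmt"
	"time"
)

func main() {
	initial := 500 * time.Millisecond         // assumed starting delay
	maxDelay := 2*time.Minute + 2*time.Second // assumed cap
	d := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying\n", attempt, d)
		d *= 2 // double the delay after each failure
		if d > maxDelay {
			d = maxDelay
		}
	}
}

After seven failures the wait reaches 64 s, matching the 1m4s above; the mount itself keeps failing because the metrics-daemon-secret object is "not registered", i.e. the kubelet has not yet been able to sync that Secret from the API server.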
[the node-status cycle continues roughly every 100 ms, identical apart from timestamps, from 19:05:20.937 through 19:05:22.386]
Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.445446 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.445516 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:05:22 crc kubenswrapper[4828]: E1205 19:05:22.445628 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.445708 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.445878 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:05:22 crc kubenswrapper[4828]: E1205 19:05:22.446066 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
Dec 05 19:05:22 crc kubenswrapper[4828]: E1205 19:05:22.446271 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:22 crc kubenswrapper[4828]: E1205 19:05:22.446396 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.465157 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd5ce72f-0011-47fc-87b0-e5d24dd07faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc52dbbcfbf16aff6f2984f391cddea4e4e04ce4b402a312f47cf5c0840f119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d7dbae45d5825575b4ee70c74e486cf3da64b0da3d60feb158c8ad3b77749f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d7dbae45d5825575b4ee70c74e486cf3da64b0da3d60feb158c8ad3b77749f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.490183 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.490253 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.490272 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.490297 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.490314 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:22Z","lastTransitionTime":"2025-12-05T19:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.497425 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be569ff-0725-412f-ac1a-da4f5077bc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6049d3385b5ee06007c8d6c858280974c2ff941
a9d87574341be11e22f15bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:05:00Z\\\",\\\"message\\\":\\\"/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1205 19:05:00.224392 6851 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 19:05:00.224895 6851 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24dk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tzshq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.514653 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0595333b-a181-4a2b-90b8-e2accf80e78e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4bxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bvf6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.536242 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99096f90-c645-4357-aaa2-bd064b3983ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73850e964af59ac58ce86ba0ab02384b640224b8c486d8a9b14402444e2406a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd852578d77cb764975f438d3fcd2eccc9ccd7c45243ee28e296bad8fda586e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4662b1a04b814dc870b00f00f38d9380d3df1db2539953e2d1558abbcdafb7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.557538 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562eb7597a1fcdefb67657d35922e1557ca975572605ea9b949b0c7d8b4669f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.574252 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nm8v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9d9c5b-3bb6-4341-a670-8dec89ab476e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02dd2ae734479660011418b998ced87a23955e6ae02e0923671d259c2588d2ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nm8v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.586629 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-phlsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a660bd9-b4aa-4858-89e9-52a3782162d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b43b1e3d8223be91e1a1e159e56268193a1f3f55c545fb1a2641f2e02e51a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgm84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-phlsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.593110 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.593181 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.593200 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.593227 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.593247 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:22Z","lastTransitionTime":"2025-12-05T19:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.603970 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a74199c1-79be-49b4-9c04-fdb48847c85e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://365174ba4a99a3843fa5f5da00addeb8c6d484addaa544062d49266128888627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65pzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nlqsv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.621138 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44935bbd-b8fe-44ed-93ac-86eed967e178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9fcf44679b22e61763bd01648f92ef1e645b40edd18e5f5f1d577bdd75952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1717f608fb766a765e800128eb9bc99275b75519406b39bf88eb811ea1ea78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjq74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dthbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.644236 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d58c4084-ad67-4ee8-8f79-031ebb86b9db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e32014f3bd65ea67a160cfbc4fdb6af03280d44beab06e5f4525ac2e52229e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0458fd836b319867af078ba03aeac3700fc88758be319f759921b8b43eb04fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9decc4a31e121c22bd0a201ff65ffb2d7d915b29ea89f6f679f62f6d77d2137e\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a931396b9c2b52d656f7777e6585b78647bd88c492282dba54b1679693b12aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a14c954f54641fd36c68e9cb5461cbb7b6e0add8f471707b78d2af62162490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b65175c589f847f514991d4f24392a97cbafc7542f951a2a905b7c3f221a87eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2abb55c929d4f10e4de2ba0059b2014f0dc842aa740371edeeaf823c837dd6ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8b352efcc547a9d638582bcd0e978608328a42fe004104a374943cf1a63b339\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.663992 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1662aeae-9ff0-4304-8fc5-c957c2ae9f39\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 19:04:00.085767 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 19:04:00.086048 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 19:04:00.087949 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1882962317/tls.crt::/tmp/serving-cert-1882962317/tls.key\\\\\\\"\\\\nI1205 19:04:00.444715 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 19:04:00.450867 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 19:04:00.451344 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 19:04:00.451382 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 19:04:00.451387 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 19:04:00.455676 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 19:04:00.455694 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 19:04:00.455703 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455709 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 19:04:00.455714 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 19:04:00.455717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 19:04:00.455721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 19:04:00.455724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 19:04:00.457299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.686018 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.696568 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.696598 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.696607 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.696623 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.696634 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:22Z","lastTransitionTime":"2025-12-05T19:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.701405 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.716454 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ksv4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e927a669-7d9d-442a-b020-339804e95af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T19:04:50Z\\\",\\\"message\\\":\\\"2025-12-05T19:04:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df\\\\n2025-12-05T19:04:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d10fbb-6acb-4f8e-972d-48658f4f16df to /host/opt/cni/bin/\\\\n2025-12-05T19:04:05Z [verbose] multus-daemon started\\\\n2025-12-05T19:04:05Z [verbose] Readiness Indicator file check\\\\n2025-12-05T19:04:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcjjh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ksv4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.733948 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac03ea80-7eac-4147-99e2-7e71ce2d445d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5897df04b9f5ae0fe2d732c74d60c0e3c1c1aecf6fd21dbb3b43dd0f374b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a7d2eb47db1c4257460e84470c6aa096d27899281a73bce5247c7c3b259c183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c0dadb3b4f125469c4dec525da5f9054191054b32cc0bc7a5b71fad50a494b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cb8e008d76531413d4ec31bfdf79f0fb87a654388f5189ac4e10d3c48d4bdcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:03:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:03:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.747525 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60092b3a5c5ef93bf298707d7ec72dd7449ce243ca15db184e3d4b2be92edd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b0022437b3c632b627b3d840af49b8b2da4c9cf94e84cf6a1795ea5bc1d034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.761417 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.771661 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b91f48b614796c0eeb3f1451a145a147d0eae29a4e95a614a596f36fb707f705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.786839 4828 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnk88" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ae83517-5582-40f0-8f8c-f61e17a0b812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T19:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c81d9475b5e3b88e21ba70262d2c74c28a1907cf0241e00ce4eb57a70385e706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T19:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4158101822f3f063216646137a28350299c6fecdd0a6a3c71c853092992d8872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b035d6d0334bb0594faaa405ccbd4f22a4a04fe8e87560c32c7bf941c4859286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395c8b0992b0406395bd791b0610a3e1fa62e3693485490e38e113564137b495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53113c46f2b0eec298bd7173fd71049be46a2ad713df2a24fcef1d6e041d8f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff494aed4e621159dba3a11f140a23ab6cf512f3265115a6eabf4212aa8a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f07e45b727620a72e5f3e4b6d333085f76141d72b0a857b01c9de9db5ec82c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T19:04:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T19:04:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nlt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T19:04:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnk88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:22Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.800137 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.800168 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 
19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.800199 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.800214 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:22 crc kubenswrapper[4828]: I1205 19:05:22.800224 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:22Z","lastTransitionTime":"2025-12-05T19:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry block (kubelet_node_status.go:724 "Recording event message for node" for NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID and NodeNotReady, followed by setters.go:603 "Node became not ready" with the identical KubeletNotReady/CNI message) repeats at roughly 100 ms intervals from 19:05:22.902437 through 19:05:24.352861; only the timestamps advance ...]
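Every status-patch failure above shares one root cause, spelled out in the error string itself: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, months before the node clock's 2025-12-05. A minimal Go sketch (not part of the log, assumed to run on the node itself) that prints the validity window of whatever certificate that endpoint currently serves:

```go
// Sketch only: dial the webhook endpoint the kubelet is failing against and
// report the served certificate's validity window. The address 127.0.0.1:9743
// comes from the log above; everything else is illustrative.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Skip chain verification on purpose: we want to *read* the expired
	// certificate, not trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}
```

Skipping verification here only reads the certificate for inspection; it does not make the webhook trusted, and the kubelet's own calls will keep failing until the certificate is rotated.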
Dec 05 19:05:24 crc kubenswrapper[4828]: I1205 19:05:24.446482 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n"
Dec 05 19:05:24 crc kubenswrapper[4828]: I1205 19:05:24.446504 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 19:05:24 crc kubenswrapper[4828]: I1205 19:05:24.446538 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 19:05:24 crc kubenswrapper[4828]: E1205 19:05:24.447148 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 19:05:24 crc kubenswrapper[4828]: I1205 19:05:24.446697 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 19:05:24 crc kubenswrapper[4828]: E1205 19:05:24.447267 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 19:05:24 crc kubenswrapper[4828]: E1205 19:05:24.447421 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 19:05:24 crc kubenswrapper[4828]: E1205 19:05:24.447772 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e"
Dec 05 19:05:24 crc kubenswrapper[4828]: I1205 19:05:24.455409 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:24 crc kubenswrapper[4828]: I1205 19:05:24.455482 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:24 crc kubenswrapper[4828]: I1205 19:05:24.455504 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:24 crc kubenswrapper[4828]: I1205 19:05:24.455534 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:24 crc kubenswrapper[4828]: I1205 19:05:24.455556 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:24Z","lastTransitionTime":"2025-12-05T19:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... identical five-entry node-status blocks continue at ~100 ms intervals from 19:05:24.558359 through 19:05:25.695764, unchanged except for the advancing timestamps ...]
Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.711599 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.711662 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.711696 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.711724 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.711746 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:25Z","lastTransitionTime":"2025-12-05T19:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:25 crc kubenswrapper[4828]: E1205 19:05:25.732869 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:25Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.739624 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.739691 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.739709 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.739734 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.739752 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:25Z","lastTransitionTime":"2025-12-05T19:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:25 crc kubenswrapper[4828]: E1205 19:05:25.763285 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:25Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.768555 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.768615 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.768651 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.768681 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.768706 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:25Z","lastTransitionTime":"2025-12-05T19:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:25 crc kubenswrapper[4828]: E1205 19:05:25.789903 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:25Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.794708 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.794779 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.794804 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.794868 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.794894 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:25Z","lastTransitionTime":"2025-12-05T19:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:25 crc kubenswrapper[4828]: E1205 19:05:25.810781 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:25Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.815496 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.815564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.815588 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.815625 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.815648 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:25Z","lastTransitionTime":"2025-12-05T19:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:25 crc kubenswrapper[4828]: E1205 19:05:25.831450 4828 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T19:05:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef06a32a-0efd-4207-a13f-220645e4e6a2\\\",\\\"systemUUID\\\":\\\"76f17a1f-8558-4034-87e9-acb0adb87b21\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T19:05:25Z is after 2025-08-24T17:21:41Z" Dec 05 19:05:25 crc kubenswrapper[4828]: E1205 19:05:25.831693 4828 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.833742 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.833788 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.833804 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.833855 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.833872 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:25Z","lastTransitionTime":"2025-12-05T19:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.936327 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.936389 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.936407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.936429 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:25 crc kubenswrapper[4828]: I1205 19:05:25.936445 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:25Z","lastTransitionTime":"2025-12-05T19:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.039550 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.039618 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.039641 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.039671 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.039694 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:26Z","lastTransitionTime":"2025-12-05T19:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.143396 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.143464 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.143484 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.143509 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.143529 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:26Z","lastTransitionTime":"2025-12-05T19:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.246279 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.246324 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.246339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.246360 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.246373 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:26Z","lastTransitionTime":"2025-12-05T19:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.348553 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.348613 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.348631 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.348653 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.348669 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:26Z","lastTransitionTime":"2025-12-05T19:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.445923 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.445974 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.446175 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:26 crc kubenswrapper[4828]: E1205 19:05:26.446173 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:26 crc kubenswrapper[4828]: E1205 19:05:26.446367 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:26 crc kubenswrapper[4828]: E1205 19:05:26.446500 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.446772 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:26 crc kubenswrapper[4828]: E1205 19:05:26.447009 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.451384 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.451432 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.451668 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.451740 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.451766 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:26Z","lastTransitionTime":"2025-12-05T19:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.554741 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.554814 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.554875 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.554916 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.554948 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:26Z","lastTransitionTime":"2025-12-05T19:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.660301 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.660364 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.660438 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.660468 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.660500 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:26Z","lastTransitionTime":"2025-12-05T19:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.763447 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.763490 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.763500 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.763534 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.763546 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:26Z","lastTransitionTime":"2025-12-05T19:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.866233 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.866282 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.866300 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.866321 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.866338 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:26Z","lastTransitionTime":"2025-12-05T19:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.969308 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.969351 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.969362 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.969377 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:26 crc kubenswrapper[4828]: I1205 19:05:26.969391 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:26Z","lastTransitionTime":"2025-12-05T19:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.072167 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.072210 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.072222 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.072239 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.072251 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:27Z","lastTransitionTime":"2025-12-05T19:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.174515 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.174546 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.174557 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.174571 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.174579 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:27Z","lastTransitionTime":"2025-12-05T19:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.282505 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.282564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.282582 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.282606 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.282624 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:27Z","lastTransitionTime":"2025-12-05T19:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.386099 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.386164 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.386186 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.386215 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.386237 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:27Z","lastTransitionTime":"2025-12-05T19:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.489019 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.489068 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.489083 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.489104 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.489121 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:27Z","lastTransitionTime":"2025-12-05T19:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.591747 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.591813 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.591880 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.591907 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.591924 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:27Z","lastTransitionTime":"2025-12-05T19:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.694774 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.694876 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.694896 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.694918 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.694934 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:27Z","lastTransitionTime":"2025-12-05T19:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.798183 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.798265 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.798305 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.798335 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.798359 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:27Z","lastTransitionTime":"2025-12-05T19:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.901513 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.901569 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.901587 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.901611 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:27 crc kubenswrapper[4828]: I1205 19:05:27.901632 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:27Z","lastTransitionTime":"2025-12-05T19:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.003497 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.003541 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.003551 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.003568 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.003580 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:28Z","lastTransitionTime":"2025-12-05T19:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.105462 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.105494 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.105501 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.105514 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.105526 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:28Z","lastTransitionTime":"2025-12-05T19:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.208349 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.208396 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.208409 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.208429 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.208441 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:28Z","lastTransitionTime":"2025-12-05T19:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.310726 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.310769 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.310780 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.310796 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.310809 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:28Z","lastTransitionTime":"2025-12-05T19:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.413909 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.413961 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.413973 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.413992 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.414009 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:28Z","lastTransitionTime":"2025-12-05T19:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.446300 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.446379 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.446453 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:28 crc kubenswrapper[4828]: E1205 19:05:28.446466 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:28 crc kubenswrapper[4828]: E1205 19:05:28.446586 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.447047 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:28 crc kubenswrapper[4828]: E1205 19:05:28.447148 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:28 crc kubenswrapper[4828]: E1205 19:05:28.447244 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.447651 4828 scope.go:117] "RemoveContainer" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd" Dec 05 19:05:28 crc kubenswrapper[4828]: E1205 19:05:28.447930 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.517134 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.517195 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.517213 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.517235 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.517251 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:28Z","lastTransitionTime":"2025-12-05T19:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.620499 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.620570 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.620592 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.620621 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.620642 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:28Z","lastTransitionTime":"2025-12-05T19:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.723141 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.723177 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.723186 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.723199 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.723208 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:28Z","lastTransitionTime":"2025-12-05T19:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.826686 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.826749 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.826766 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.826787 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.826879 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:28Z","lastTransitionTime":"2025-12-05T19:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.930107 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.930175 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.930194 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.930224 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:28 crc kubenswrapper[4828]: I1205 19:05:28.930243 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:28Z","lastTransitionTime":"2025-12-05T19:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.033274 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.033352 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.033369 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.033395 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.033414 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:29Z","lastTransitionTime":"2025-12-05T19:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.136911 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.136955 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.136964 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.136981 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.136990 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:29Z","lastTransitionTime":"2025-12-05T19:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.239539 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.239597 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.239617 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.239639 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.239656 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:29Z","lastTransitionTime":"2025-12-05T19:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.343477 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.343515 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.343523 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.343537 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.343545 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:29Z","lastTransitionTime":"2025-12-05T19:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.446342 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.446378 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.446391 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.446407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.446418 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:29Z","lastTransitionTime":"2025-12-05T19:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.548409 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.548451 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.548474 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.548493 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.548545 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:29Z","lastTransitionTime":"2025-12-05T19:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.651134 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.651197 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.651209 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.651225 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.651237 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:29Z","lastTransitionTime":"2025-12-05T19:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.754108 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.754202 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.754237 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.754278 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.754310 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:29Z","lastTransitionTime":"2025-12-05T19:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.857530 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.857585 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.857601 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.857621 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.857637 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:29Z","lastTransitionTime":"2025-12-05T19:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.960298 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.960416 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.960440 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.960503 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:29 crc kubenswrapper[4828]: I1205 19:05:29.960525 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:29Z","lastTransitionTime":"2025-12-05T19:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.064113 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.064162 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.064173 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.064189 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.064198 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:30Z","lastTransitionTime":"2025-12-05T19:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.166615 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.166664 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.166679 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.166695 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.166708 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:30Z","lastTransitionTime":"2025-12-05T19:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.269370 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.269407 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.269416 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.269430 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.269439 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:30Z","lastTransitionTime":"2025-12-05T19:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.372141 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.372208 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.372227 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.372254 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.372277 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:30Z","lastTransitionTime":"2025-12-05T19:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.446236 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.446366 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.446530 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.446545 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:30 crc kubenswrapper[4828]: E1205 19:05:30.446655 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:30 crc kubenswrapper[4828]: E1205 19:05:30.446884 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:30 crc kubenswrapper[4828]: E1205 19:05:30.447043 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:30 crc kubenswrapper[4828]: E1205 19:05:30.447203 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.474999 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.475063 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.475084 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.475110 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.475133 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:30Z","lastTransitionTime":"2025-12-05T19:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.578315 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.578369 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.578386 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.578408 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.578425 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:30Z","lastTransitionTime":"2025-12-05T19:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.681768 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.681885 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.681908 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.681936 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.681991 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:30Z","lastTransitionTime":"2025-12-05T19:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.784783 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.784818 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.784845 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.784860 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.784869 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:30Z","lastTransitionTime":"2025-12-05T19:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.888042 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.888081 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.888095 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.888113 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.888129 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:30Z","lastTransitionTime":"2025-12-05T19:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.991555 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.991635 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.991662 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.991691 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:30 crc kubenswrapper[4828]: I1205 19:05:30.991715 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:30Z","lastTransitionTime":"2025-12-05T19:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.095486 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.095585 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.095611 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.095644 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.095667 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:31Z","lastTransitionTime":"2025-12-05T19:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.198163 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.198222 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.198239 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.198265 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.198282 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:31Z","lastTransitionTime":"2025-12-05T19:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.301328 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.301370 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.301381 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.301395 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.301406 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:31Z","lastTransitionTime":"2025-12-05T19:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.404083 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.404156 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.404177 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.404207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.404229 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:31Z","lastTransitionTime":"2025-12-05T19:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.506929 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.506960 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.506969 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.506983 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.506992 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:31Z","lastTransitionTime":"2025-12-05T19:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.609647 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.609709 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.609726 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.609749 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.609769 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:31Z","lastTransitionTime":"2025-12-05T19:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.712744 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.712807 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.712845 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.712867 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.712881 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:31Z","lastTransitionTime":"2025-12-05T19:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.815887 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.815935 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.815949 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.815967 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.815979 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:31Z","lastTransitionTime":"2025-12-05T19:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.919421 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.919486 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.919507 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.919534 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:31 crc kubenswrapper[4828]: I1205 19:05:31.919556 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:31Z","lastTransitionTime":"2025-12-05T19:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.023341 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.023422 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.023436 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.023453 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.023465 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:32Z","lastTransitionTime":"2025-12-05T19:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.127450 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.127560 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.127579 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.127605 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.127621 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:32Z","lastTransitionTime":"2025-12-05T19:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.230321 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.230351 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.230360 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.230373 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.230383 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:32Z","lastTransitionTime":"2025-12-05T19:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.333094 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.333649 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.334135 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.334498 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.334859 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:32Z","lastTransitionTime":"2025-12-05T19:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.437660 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.437712 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.437729 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.437753 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.437773 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:32Z","lastTransitionTime":"2025-12-05T19:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.448039 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.448117 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:32 crc kubenswrapper[4828]: E1205 19:05:32.448683 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.448282 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:32 crc kubenswrapper[4828]: E1205 19:05:32.448877 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.448236 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:32 crc kubenswrapper[4828]: E1205 19:05:32.449131 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:32 crc kubenswrapper[4828]: E1205 19:05:32.449353 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.508988 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dthbt" podStartSLOduration=90.508962215 podStartE2EDuration="1m30.508962215s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:32.479068726 +0000 UTC m=+110.374291072" watchObservedRunningTime="2025-12-05 19:05:32.508962215 +0000 UTC m=+110.404184551" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.535752 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=92.535731625 podStartE2EDuration="1m32.535731625s" podCreationTimestamp="2025-12-05 19:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:32.512969039 +0000 UTC m=+110.408191355" watchObservedRunningTime="2025-12-05 19:05:32.535731625 +0000 UTC m=+110.430953941" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.544974 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.545057 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.545081 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.545111 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.545135 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:32Z","lastTransitionTime":"2025-12-05T19:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.582700 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nm8v4" podStartSLOduration=91.582672085 podStartE2EDuration="1m31.582672085s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:32.56693211 +0000 UTC m=+110.462154476" watchObservedRunningTime="2025-12-05 19:05:32.582672085 +0000 UTC m=+110.477894401" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.596871 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-phlsx" podStartSLOduration=91.59684941 podStartE2EDuration="1m31.59684941s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:32.583146837 +0000 UTC m=+110.478369153" watchObservedRunningTime="2025-12-05 19:05:32.59684941 +0000 UTC m=+110.492071746" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.620216 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podStartSLOduration=91.620197132 podStartE2EDuration="1m31.620197132s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:32.597044835 +0000 UTC m=+110.492267151" watchObservedRunningTime="2025-12-05 19:05:32.620197132 +0000 UTC m=+110.515419448" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.620524 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=88.62051829 podStartE2EDuration="1m28.62051829s" podCreationTimestamp="2025-12-05 19:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:32.619476583 +0000 UTC m=+110.514698909" watchObservedRunningTime="2025-12-05 19:05:32.62051829 +0000 UTC m=+110.515740606" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.636684 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=92.636662505 podStartE2EDuration="1m32.636662505s" podCreationTimestamp="2025-12-05 19:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:32.63644585 +0000 UTC m=+110.531668206" watchObservedRunningTime="2025-12-05 19:05:32.636662505 +0000 UTC m=+110.531884831" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.647840 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.647880 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.647891 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.647907 4828 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.647918 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:32Z","lastTransitionTime":"2025-12-05T19:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.687994 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ksv4w" podStartSLOduration=91.687975928 podStartE2EDuration="1m31.687975928s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:32.687578068 +0000 UTC m=+110.582800374" watchObservedRunningTime="2025-12-05 19:05:32.687975928 +0000 UTC m=+110.583198234" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.705437 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=64.705416578 podStartE2EDuration="1m4.705416578s" podCreationTimestamp="2025-12-05 19:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:32.705056218 +0000 UTC m=+110.600278544" watchObservedRunningTime="2025-12-05 19:05:32.705416578 +0000 UTC m=+110.600638874" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.749647 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.749674 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.749682 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.749694 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.749704 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:32Z","lastTransitionTime":"2025-12-05T19:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.760905 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lnk88" podStartSLOduration=91.760887926 podStartE2EDuration="1m31.760887926s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:32.76023073 +0000 UTC m=+110.655453046" watchObservedRunningTime="2025-12-05 19:05:32.760887926 +0000 UTC m=+110.656110232" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.770555 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.770540565 podStartE2EDuration="27.770540565s" podCreationTimestamp="2025-12-05 19:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:32.770022272 +0000 UTC m=+110.665244588" watchObservedRunningTime="2025-12-05 19:05:32.770540565 +0000 UTC m=+110.665762871" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.852744 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.852858 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.852873 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.852901 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.852914 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:32Z","lastTransitionTime":"2025-12-05T19:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.956160 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.956198 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.956221 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.956238 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:32 crc kubenswrapper[4828]: I1205 19:05:32.956248 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:32Z","lastTransitionTime":"2025-12-05T19:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.059172 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.059236 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.059253 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.059275 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.059289 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:33Z","lastTransitionTime":"2025-12-05T19:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.161748 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.161815 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.161869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.161896 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.161913 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:33Z","lastTransitionTime":"2025-12-05T19:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.265168 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.265240 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.265267 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.265296 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.265319 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:33Z","lastTransitionTime":"2025-12-05T19:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.368226 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.368278 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.368290 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.368308 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.368321 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:33Z","lastTransitionTime":"2025-12-05T19:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.471178 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.471254 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.471273 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.471300 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.471320 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:33Z","lastTransitionTime":"2025-12-05T19:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.574110 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.574185 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.574207 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.574242 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.574263 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:33Z","lastTransitionTime":"2025-12-05T19:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.676936 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.677004 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.677027 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.677058 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.677080 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:33Z","lastTransitionTime":"2025-12-05T19:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.780666 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.780745 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.780770 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.780803 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.780876 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:33Z","lastTransitionTime":"2025-12-05T19:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.883528 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.883607 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.883630 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.883657 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.883677 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:33Z","lastTransitionTime":"2025-12-05T19:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.987190 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.987258 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.987269 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.987283 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:33 crc kubenswrapper[4828]: I1205 19:05:33.987293 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:33Z","lastTransitionTime":"2025-12-05T19:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.090926 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.090983 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.091005 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.091030 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.091048 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:34Z","lastTransitionTime":"2025-12-05T19:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.193889 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.193950 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.193972 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.193996 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.194014 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:34Z","lastTransitionTime":"2025-12-05T19:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.296724 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.296761 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.296771 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.296785 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.296796 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:34Z","lastTransitionTime":"2025-12-05T19:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.399388 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.399436 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.399448 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.399474 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.399489 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:34Z","lastTransitionTime":"2025-12-05T19:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.446182 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:34 crc kubenswrapper[4828]: E1205 19:05:34.446417 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.446778 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:34 crc kubenswrapper[4828]: E1205 19:05:34.446940 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.447155 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.447219 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:34 crc kubenswrapper[4828]: E1205 19:05:34.447349 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:34 crc kubenswrapper[4828]: E1205 19:05:34.447621 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.502029 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.502101 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.502124 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.502155 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.502254 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:34Z","lastTransitionTime":"2025-12-05T19:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.605745 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.605849 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.605869 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.605895 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.605913 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:34Z","lastTransitionTime":"2025-12-05T19:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.708495 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.708537 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.708548 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.708598 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.708614 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:34Z","lastTransitionTime":"2025-12-05T19:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.811393 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.811437 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.811446 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.811460 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.811469 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:34Z","lastTransitionTime":"2025-12-05T19:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.913481 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.913519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.913529 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.913541 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:34 crc kubenswrapper[4828]: I1205 19:05:34.913550 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:34Z","lastTransitionTime":"2025-12-05T19:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.015967 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.016038 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.016057 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.016084 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.016103 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:35Z","lastTransitionTime":"2025-12-05T19:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.118993 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.119051 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.119063 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.119082 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.119092 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:35Z","lastTransitionTime":"2025-12-05T19:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.221276 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.221339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.221358 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.221382 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.221397 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:35Z","lastTransitionTime":"2025-12-05T19:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.324339 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.324422 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.324444 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.324473 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.324490 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:35Z","lastTransitionTime":"2025-12-05T19:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.427564 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.427684 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.427702 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.427728 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.427746 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:35Z","lastTransitionTime":"2025-12-05T19:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.530395 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.530471 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.530508 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.530539 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.530563 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:35Z","lastTransitionTime":"2025-12-05T19:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.633875 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.633933 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.633973 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.634003 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.634023 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:35Z","lastTransitionTime":"2025-12-05T19:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.736796 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.736904 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.736924 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.736947 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.736964 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:35Z","lastTransitionTime":"2025-12-05T19:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.839586 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.839708 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.839731 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.839762 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.839868 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:35Z","lastTransitionTime":"2025-12-05T19:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.942646 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.942701 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.942718 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.942740 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:35 crc kubenswrapper[4828]: I1205 19:05:35.942757 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:35Z","lastTransitionTime":"2025-12-05T19:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.045427 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.045480 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.045496 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.045519 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.045539 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:36Z","lastTransitionTime":"2025-12-05T19:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.148375 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.148425 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.148441 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.148461 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.148478 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:36Z","lastTransitionTime":"2025-12-05T19:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.219927 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.219964 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.219972 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.219985 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.219993 4828 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T19:05:36Z","lastTransitionTime":"2025-12-05T19:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.275374 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb"] Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.275768 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.279310 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.279520 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.279676 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.279918 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.382300 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/868aaff2-0641-44ae-8912-d3f858a8e580-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.382350 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/868aaff2-0641-44ae-8912-d3f858a8e580-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.382387 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/868aaff2-0641-44ae-8912-d3f858a8e580-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.382415 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/868aaff2-0641-44ae-8912-d3f858a8e580-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.382493 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/868aaff2-0641-44ae-8912-d3f858a8e580-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.446455 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:36 crc kubenswrapper[4828]: E1205 19:05:36.446770 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.447038 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.447092 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:36 crc kubenswrapper[4828]: E1205 19:05:36.447160 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.447307 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:36 crc kubenswrapper[4828]: E1205 19:05:36.447368 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:36 crc kubenswrapper[4828]: E1205 19:05:36.447505 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.483945 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/868aaff2-0641-44ae-8912-d3f858a8e580-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.484014 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/868aaff2-0641-44ae-8912-d3f858a8e580-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.484078 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/868aaff2-0641-44ae-8912-d3f858a8e580-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.484125 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/868aaff2-0641-44ae-8912-d3f858a8e580-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.484171 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/868aaff2-0641-44ae-8912-d3f858a8e580-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.484258 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/868aaff2-0641-44ae-8912-d3f858a8e580-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.484475 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/868aaff2-0641-44ae-8912-d3f858a8e580-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.485992 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/868aaff2-0641-44ae-8912-d3f858a8e580-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 
05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.499648 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/868aaff2-0641-44ae-8912-d3f858a8e580-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.501986 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/868aaff2-0641-44ae-8912-d3f858a8e580-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l6clb\" (UID: \"868aaff2-0641-44ae-8912-d3f858a8e580\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: I1205 19:05:36.590676 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" Dec 05 19:05:36 crc kubenswrapper[4828]: W1205 19:05:36.612321 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod868aaff2_0641_44ae_8912_d3f858a8e580.slice/crio-2fd0f935285d89b6f2f4bf4f5b28047594462e8de589cafb2373c7b3c73196e2 WatchSource:0}: Error finding container 2fd0f935285d89b6f2f4bf4f5b28047594462e8de589cafb2373c7b3c73196e2: Status 404 returned error can't find the container with id 2fd0f935285d89b6f2f4bf4f5b28047594462e8de589cafb2373c7b3c73196e2 Dec 05 19:05:37 crc kubenswrapper[4828]: I1205 19:05:37.258063 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ksv4w_e927a669-7d9d-442a-b020-339804e95af2/kube-multus/1.log" Dec 05 19:05:37 crc kubenswrapper[4828]: I1205 19:05:37.259083 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ksv4w_e927a669-7d9d-442a-b020-339804e95af2/kube-multus/0.log" Dec 05 19:05:37 crc kubenswrapper[4828]: I1205 19:05:37.259128 4828 generic.go:334] "Generic (PLEG): container finished" podID="e927a669-7d9d-442a-b020-339804e95af2" containerID="836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5" exitCode=1 Dec 05 19:05:37 crc kubenswrapper[4828]: I1205 19:05:37.259189 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ksv4w" event={"ID":"e927a669-7d9d-442a-b020-339804e95af2","Type":"ContainerDied","Data":"836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5"} Dec 05 19:05:37 crc kubenswrapper[4828]: I1205 19:05:37.259223 4828 scope.go:117] "RemoveContainer" containerID="a373586f1f6bb564cd4f5908a45b1b3213b57585927ed5268758d9a1eb5e1864" Dec 05 19:05:37 crc kubenswrapper[4828]: I1205 19:05:37.259555 4828 scope.go:117] "RemoveContainer" containerID="836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5" Dec 05 19:05:37 crc kubenswrapper[4828]: E1205 19:05:37.259717 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ksv4w_openshift-multus(e927a669-7d9d-442a-b020-339804e95af2)\"" pod="openshift-multus/multus-ksv4w" podUID="e927a669-7d9d-442a-b020-339804e95af2" Dec 05 19:05:37 crc kubenswrapper[4828]: I1205 19:05:37.260918 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" 
event={"ID":"868aaff2-0641-44ae-8912-d3f858a8e580","Type":"ContainerStarted","Data":"1a006e7cf56fe453423e1dcd1421356d85c629d196e2ea4c49f1ddb36e8c41e7"} Dec 05 19:05:37 crc kubenswrapper[4828]: I1205 19:05:37.260947 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" event={"ID":"868aaff2-0641-44ae-8912-d3f858a8e580","Type":"ContainerStarted","Data":"2fd0f935285d89b6f2f4bf4f5b28047594462e8de589cafb2373c7b3c73196e2"} Dec 05 19:05:37 crc kubenswrapper[4828]: I1205 19:05:37.320018 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l6clb" podStartSLOduration=96.319996761 podStartE2EDuration="1m36.319996761s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:37.319321694 +0000 UTC m=+115.214544010" watchObservedRunningTime="2025-12-05 19:05:37.319996761 +0000 UTC m=+115.215219067" Dec 05 19:05:38 crc kubenswrapper[4828]: I1205 19:05:38.265759 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ksv4w_e927a669-7d9d-442a-b020-339804e95af2/kube-multus/1.log" Dec 05 19:05:38 crc kubenswrapper[4828]: I1205 19:05:38.446282 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:38 crc kubenswrapper[4828]: I1205 19:05:38.446360 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:38 crc kubenswrapper[4828]: I1205 19:05:38.446713 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:38 crc kubenswrapper[4828]: E1205 19:05:38.446897 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:38 crc kubenswrapper[4828]: I1205 19:05:38.447001 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:38 crc kubenswrapper[4828]: E1205 19:05:38.447048 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:38 crc kubenswrapper[4828]: E1205 19:05:38.447177 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:38 crc kubenswrapper[4828]: E1205 19:05:38.447340 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:39 crc kubenswrapper[4828]: I1205 19:05:39.447134 4828 scope.go:117] "RemoveContainer" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd" Dec 05 19:05:39 crc kubenswrapper[4828]: E1205 19:05:39.447381 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tzshq_openshift-ovn-kubernetes(1be569ff-0725-412f-ac1a-da4f5077bc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" Dec 05 19:05:40 crc kubenswrapper[4828]: I1205 19:05:40.445873 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:40 crc kubenswrapper[4828]: I1205 19:05:40.445979 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:40 crc kubenswrapper[4828]: E1205 19:05:40.446107 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:40 crc kubenswrapper[4828]: I1205 19:05:40.446173 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:40 crc kubenswrapper[4828]: E1205 19:05:40.446260 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:40 crc kubenswrapper[4828]: E1205 19:05:40.446455 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:40 crc kubenswrapper[4828]: I1205 19:05:40.446630 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:40 crc kubenswrapper[4828]: E1205 19:05:40.446841 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:42 crc kubenswrapper[4828]: E1205 19:05:42.427133 4828 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 05 19:05:42 crc kubenswrapper[4828]: I1205 19:05:42.446134 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:42 crc kubenswrapper[4828]: I1205 19:05:42.446159 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:42 crc kubenswrapper[4828]: I1205 19:05:42.446141 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:42 crc kubenswrapper[4828]: I1205 19:05:42.446464 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:42 crc kubenswrapper[4828]: E1205 19:05:42.446454 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:42 crc kubenswrapper[4828]: E1205 19:05:42.446578 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:42 crc kubenswrapper[4828]: E1205 19:05:42.446636 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:42 crc kubenswrapper[4828]: E1205 19:05:42.446714 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:42 crc kubenswrapper[4828]: E1205 19:05:42.541379 4828 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 19:05:44 crc kubenswrapper[4828]: I1205 19:05:44.446365 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:44 crc kubenswrapper[4828]: I1205 19:05:44.446444 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:44 crc kubenswrapper[4828]: I1205 19:05:44.446365 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:44 crc kubenswrapper[4828]: I1205 19:05:44.446406 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:44 crc kubenswrapper[4828]: E1205 19:05:44.446592 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:44 crc kubenswrapper[4828]: E1205 19:05:44.446799 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:44 crc kubenswrapper[4828]: E1205 19:05:44.446918 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:44 crc kubenswrapper[4828]: E1205 19:05:44.446979 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:46 crc kubenswrapper[4828]: I1205 19:05:46.445785 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:46 crc kubenswrapper[4828]: I1205 19:05:46.445905 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:46 crc kubenswrapper[4828]: I1205 19:05:46.445809 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:46 crc kubenswrapper[4828]: I1205 19:05:46.446054 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:46 crc kubenswrapper[4828]: E1205 19:05:46.446067 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:46 crc kubenswrapper[4828]: E1205 19:05:46.446139 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:46 crc kubenswrapper[4828]: E1205 19:05:46.446201 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:46 crc kubenswrapper[4828]: E1205 19:05:46.446405 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:47 crc kubenswrapper[4828]: E1205 19:05:47.543225 4828 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 19:05:48 crc kubenswrapper[4828]: I1205 19:05:48.446040 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:48 crc kubenswrapper[4828]: E1205 19:05:48.446182 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:48 crc kubenswrapper[4828]: I1205 19:05:48.446244 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:48 crc kubenswrapper[4828]: E1205 19:05:48.446303 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:48 crc kubenswrapper[4828]: I1205 19:05:48.446658 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:48 crc kubenswrapper[4828]: E1205 19:05:48.446753 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:48 crc kubenswrapper[4828]: I1205 19:05:48.447019 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:48 crc kubenswrapper[4828]: E1205 19:05:48.447098 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:50 crc kubenswrapper[4828]: I1205 19:05:50.446043 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:50 crc kubenswrapper[4828]: I1205 19:05:50.446067 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:50 crc kubenswrapper[4828]: I1205 19:05:50.446065 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:50 crc kubenswrapper[4828]: E1205 19:05:50.446180 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:50 crc kubenswrapper[4828]: I1205 19:05:50.446225 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:50 crc kubenswrapper[4828]: E1205 19:05:50.446360 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:50 crc kubenswrapper[4828]: E1205 19:05:50.446612 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:50 crc kubenswrapper[4828]: E1205 19:05:50.446736 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:52 crc kubenswrapper[4828]: I1205 19:05:52.446067 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:52 crc kubenswrapper[4828]: I1205 19:05:52.446181 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:52 crc kubenswrapper[4828]: E1205 19:05:52.446206 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:52 crc kubenswrapper[4828]: I1205 19:05:52.446243 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:52 crc kubenswrapper[4828]: I1205 19:05:52.446270 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:52 crc kubenswrapper[4828]: E1205 19:05:52.446372 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:52 crc kubenswrapper[4828]: E1205 19:05:52.446529 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:52 crc kubenswrapper[4828]: E1205 19:05:52.446680 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:52 crc kubenswrapper[4828]: I1205 19:05:52.446859 4828 scope.go:117] "RemoveContainer" containerID="836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5" Dec 05 19:05:52 crc kubenswrapper[4828]: E1205 19:05:52.543986 4828 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 19:05:53 crc kubenswrapper[4828]: I1205 19:05:53.314931 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ksv4w_e927a669-7d9d-442a-b020-339804e95af2/kube-multus/1.log" Dec 05 19:05:53 crc kubenswrapper[4828]: I1205 19:05:53.315000 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ksv4w" event={"ID":"e927a669-7d9d-442a-b020-339804e95af2","Type":"ContainerStarted","Data":"f0c1e0c0274d4cf63dbe8ececdf93484842b90a7184f096364b27673f0f76250"} Dec 05 19:05:53 crc kubenswrapper[4828]: I1205 19:05:53.446853 4828 scope.go:117] "RemoveContainer" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd" Dec 05 19:05:54 crc kubenswrapper[4828]: I1205 19:05:54.193362 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bvf6n"] Dec 05 19:05:54 crc kubenswrapper[4828]: I1205 19:05:54.193475 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:54 crc kubenswrapper[4828]: E1205 19:05:54.193569 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:54 crc kubenswrapper[4828]: I1205 19:05:54.320035 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/3.log" Dec 05 19:05:54 crc kubenswrapper[4828]: I1205 19:05:54.322468 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerStarted","Data":"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f"} Dec 05 19:05:54 crc kubenswrapper[4828]: I1205 19:05:54.322899 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:05:54 crc kubenswrapper[4828]: I1205 19:05:54.354896 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podStartSLOduration=113.354876253 podStartE2EDuration="1m53.354876253s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:05:54.353970769 +0000 UTC m=+132.249193075" watchObservedRunningTime="2025-12-05 19:05:54.354876253 +0000 UTC m=+132.250098559" Dec 05 19:05:54 crc kubenswrapper[4828]: I1205 19:05:54.448056 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:54 crc kubenswrapper[4828]: I1205 19:05:54.448119 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:54 crc kubenswrapper[4828]: I1205 19:05:54.448069 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:54 crc kubenswrapper[4828]: E1205 19:05:54.448226 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:54 crc kubenswrapper[4828]: E1205 19:05:54.448334 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:54 crc kubenswrapper[4828]: E1205 19:05:54.448403 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:56 crc kubenswrapper[4828]: I1205 19:05:56.445890 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:56 crc kubenswrapper[4828]: I1205 19:05:56.446121 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:56 crc kubenswrapper[4828]: I1205 19:05:56.446029 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:56 crc kubenswrapper[4828]: I1205 19:05:56.446007 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:56 crc kubenswrapper[4828]: E1205 19:05:56.446291 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bvf6n" podUID="0595333b-a181-4a2b-90b8-e2accf80e78e" Dec 05 19:05:56 crc kubenswrapper[4828]: E1205 19:05:56.446504 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 19:05:56 crc kubenswrapper[4828]: E1205 19:05:56.446499 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 19:05:56 crc kubenswrapper[4828]: E1205 19:05:56.446622 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 19:05:58 crc kubenswrapper[4828]: I1205 19:05:58.445958 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:05:58 crc kubenswrapper[4828]: I1205 19:05:58.445996 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:05:58 crc kubenswrapper[4828]: I1205 19:05:58.446056 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:05:58 crc kubenswrapper[4828]: I1205 19:05:58.446104 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:05:58 crc kubenswrapper[4828]: I1205 19:05:58.448424 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 05 19:05:58 crc kubenswrapper[4828]: I1205 19:05:58.448743 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 05 19:05:58 crc kubenswrapper[4828]: I1205 19:05:58.450942 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 05 19:05:58 crc kubenswrapper[4828]: I1205 19:05:58.451538 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 05 19:05:58 crc kubenswrapper[4828]: I1205 19:05:58.453412 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 05 19:05:58 crc kubenswrapper[4828]: I1205 19:05:58.454722 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 05 19:06:04 crc kubenswrapper[4828]: I1205 19:06:04.646040 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:06:05 crc kubenswrapper[4828]: I1205 19:06:05.260105 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:06:05 crc kubenswrapper[4828]: I1205 19:06:05.260207 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:06:06 crc kubenswrapper[4828]: I1205 19:06:06.968164 4828 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.015368 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vbgcx"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.015937 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.017026 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.017858 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.018749 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.019106 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.019217 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.019224 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.020254 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.020858 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.021367 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.022861 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-f24wr"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.023586 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-f24wr" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.026947 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.027636 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.028193 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.028819 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.029901 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.030841 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.031014 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.031188 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.031522 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.032919 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.033904 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rkdvk"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.035087 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.039356 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.039982 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.040590 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.042810 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.043184 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.043214 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.046880 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.047418 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b6wdx"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.047720 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.047783 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.048140 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.048212 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7gd82"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.047747 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.048607 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.049142 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.049908 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.050414 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.050565 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.050868 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6nk4"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.051188 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.058498 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.058890 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.059165 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.059449 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.059691 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.059909 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.060139 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.060244 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.060366 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.065130 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.065356 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.065464 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.065715 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.065920 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.066071 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.066176 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.066529 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.066666 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.066704 4828 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.066805 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.066816 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.066910 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.066934 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.066990 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.070091 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.070230 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.070344 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.070478 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.070555 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.070623 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.070697 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.070767 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.070856 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.071070 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.071228 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.071346 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.071425 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 
19:06:07.071616 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.086052 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.086356 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.089061 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.089987 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.091536 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.092035 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.102909 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.102940 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.103053 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.102920 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.103202 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.103279 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.103457 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.103589 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.103675 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.103288 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.103348 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.104560 4828 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.103406 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.104766 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.105438 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.106108 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.106363 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m957x"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.106855 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.110193 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.112036 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hlnsw"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.113617 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-q9sfv"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.113972 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.114268 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hlnsw" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.119613 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.119983 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.120206 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.120240 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.121489 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.121687 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.121895 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.121993 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.122772 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd88g\" (UniqueName: \"kubernetes.io/projected/cfae7bfb-b250-48de-ad6b-e741405f07c3-kube-api-access-wd88g\") pod \"cluster-samples-operator-665b6dd947-fn8zx\" (UID: \"cfae7bfb-b250-48de-ad6b-e741405f07c3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.122811 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/724b0be1-4d6c-4b96-933d-d94b8f146bd8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.122860 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b2ef96f-044d-4b5e-97b8-9e413bc37088-client-ca\") pod \"route-controller-manager-6576b87f9c-rtxpx\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.122890 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2ef96f-044d-4b5e-97b8-9e413bc37088-config\") pod \"route-controller-manager-6576b87f9c-rtxpx\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.122916 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/724b0be1-4d6c-4b96-933d-d94b8f146bd8-etcd-client\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.122938 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/724b0be1-4d6c-4b96-933d-d94b8f146bd8-encryption-config\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.122960 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/724b0be1-4d6c-4b96-933d-d94b8f146bd8-audit-dir\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.122985 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9bq\" (UniqueName: \"kubernetes.io/projected/724b0be1-4d6c-4b96-933d-d94b8f146bd8-kube-api-access-cl9bq\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.123515 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/724b0be1-4d6c-4b96-933d-d94b8f146bd8-audit-policies\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.123588 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfae7bfb-b250-48de-ad6b-e741405f07c3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fn8zx\" (UID: \"cfae7bfb-b250-48de-ad6b-e741405f07c3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.123855 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6pwl\" (UniqueName: \"kubernetes.io/projected/e5365032-f31f-4e90-bb94-193e5d6dcc9f-kube-api-access-l6pwl\") pod \"machine-api-operator-5694c8668f-vbgcx\" (UID: \"e5365032-f31f-4e90-bb94-193e5d6dcc9f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.123938 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5365032-f31f-4e90-bb94-193e5d6dcc9f-config\") pod \"machine-api-operator-5694c8668f-vbgcx\" (UID: \"e5365032-f31f-4e90-bb94-193e5d6dcc9f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.123979 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b2ef96f-044d-4b5e-97b8-9e413bc37088-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-rtxpx\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.124013 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvd4h\" (UniqueName: \"kubernetes.io/projected/0b2ef96f-044d-4b5e-97b8-9e413bc37088-kube-api-access-lvd4h\") pod \"route-controller-manager-6576b87f9c-rtxpx\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.124040 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5365032-f31f-4e90-bb94-193e5d6dcc9f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vbgcx\" (UID: \"e5365032-f31f-4e90-bb94-193e5d6dcc9f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.124884 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/724b0be1-4d6c-4b96-933d-d94b8f146bd8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.124927 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724b0be1-4d6c-4b96-933d-d94b8f146bd8-serving-cert\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.124952 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5365032-f31f-4e90-bb94-193e5d6dcc9f-images\") pod \"machine-api-operator-5694c8668f-vbgcx\" (UID: \"e5365032-f31f-4e90-bb94-193e5d6dcc9f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.126286 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.126952 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.127361 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ws4t8"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.141493 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q8sfr"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.141747 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rpjqf"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.137661 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.142167 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rpjqf" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.142366 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.137702 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.142509 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.144027 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.144456 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.137728 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.138360 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.138402 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.138438 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.145190 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.145190 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.138938 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.140618 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.138465 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.146276 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.146522 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.164665 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.167970 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.169059 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.169290 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9knxn"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.180853 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.181060 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.184144 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.185482 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vbgcx"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.185525 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.185908 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qp52c"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.186057 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9knxn" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.186221 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l9zft"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.186364 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.186547 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.186656 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.186915 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9xcvs"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.187028 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.187125 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.187474 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.187636 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9xcvs" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.187879 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.188259 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.188884 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wk88t"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.189356 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.193413 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.194272 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.196395 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.197048 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.197212 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.197673 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.197722 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.199548 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.200062 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.200120 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.200854 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.200964 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.201497 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.201897 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.202758 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hcgdz"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.203217 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.203722 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.204354 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.204906 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.205777 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.206962 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.208909 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-xk5tk"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.209698 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xk5tk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.209783 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m957x"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.210755 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-f24wr"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.211650 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9knxn"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.213847 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.215252 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6nk4"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.215289 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.216134 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.217048 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-q9sfv"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.217542 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.217968 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.219202 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rkdvk"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.220751 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7gd82"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.221670 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.222572 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.223491 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rpjqf"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225549 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ffe87ead-c1e1-4126-8c85-3054648d6990-encryption-config\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225573 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225594 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225623 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-service-ca\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225637 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225654 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzzws\" (UniqueName: \"kubernetes.io/projected/24805259-8822-4373-aca0-68442e07b891-kube-api-access-lzzws\") pod \"openshift-apiserver-operator-796bbdcf4f-tq99m\" (UID: \"24805259-8822-4373-aca0-68442e07b891\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225670 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8d4020-b5ce-4b96-ba2a-ce605a9a514b-serving-cert\") pod \"openshift-config-operator-7777fb866f-7gd82\" (UID: \"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225714 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b2ef96f-044d-4b5e-97b8-9e413bc37088-serving-cert\") pod \"route-controller-manager-6576b87f9c-rtxpx\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225745 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvd4h\" (UniqueName: \"kubernetes.io/projected/0b2ef96f-044d-4b5e-97b8-9e413bc37088-kube-api-access-lvd4h\") pod \"route-controller-manager-6576b87f9c-rtxpx\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225775 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3e8d4020-b5ce-4b96-ba2a-ce605a9a514b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7gd82\" (UID: \"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225799 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18690a90-a6e9-43ef-8550-e90caacb0d95-service-ca-bundle\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225838 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90285316-ecf0-4bd7-a9bf-72325863399c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xdjfd\" (UID: \"90285316-ecf0-4bd7-a9bf-72325863399c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225862 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225882 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225904 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225929 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdmks\" (UniqueName: \"kubernetes.io/projected/3e8d4020-b5ce-4b96-ba2a-ce605a9a514b-kube-api-access-jdmks\") pod \"openshift-config-operator-7777fb866f-7gd82\" (UID: \"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225956 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5365032-f31f-4e90-bb94-193e5d6dcc9f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vbgcx\" (UID: \"e5365032-f31f-4e90-bb94-193e5d6dcc9f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx" 
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.225980 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-etcd-serving-ca\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226004 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/724b0be1-4d6c-4b96-933d-d94b8f146bd8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226029 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/90285316-ecf0-4bd7-a9bf-72325863399c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xdjfd\" (UID: \"90285316-ecf0-4bd7-a9bf-72325863399c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226052 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724b0be1-4d6c-4b96-933d-d94b8f146bd8-serving-cert\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226071 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5365032-f31f-4e90-bb94-193e5d6dcc9f-images\") pod \"machine-api-operator-5694c8668f-vbgcx\" (UID: \"e5365032-f31f-4e90-bb94-193e5d6dcc9f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226093 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99fg2\" (UniqueName: \"kubernetes.io/projected/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-kube-api-access-99fg2\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226118 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd88g\" (UniqueName: \"kubernetes.io/projected/cfae7bfb-b250-48de-ad6b-e741405f07c3-kube-api-access-wd88g\") pod \"cluster-samples-operator-665b6dd947-fn8zx\" (UID: \"cfae7bfb-b250-48de-ad6b-e741405f07c3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226143 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqqz6\" (UniqueName: \"kubernetes.io/projected/67625800-270c-4a66-95d0-a2853c23c26f-kube-api-access-rqqz6\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5fck\" (UID: \"67625800-270c-4a66-95d0-a2853c23c26f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226165 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48-config\") pod \"machine-approver-56656f9798-6zl9m\" (UID: \"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226185 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-image-import-ca\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226208 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226227 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b608ed-2cef-43a4-8b6e-70efed829e65-config\") pod \"console-operator-58897d9998-hlnsw\" (UID: \"b8b608ed-2cef-43a4-8b6e-70efed829e65\") " pod="openshift-console-operator/console-operator-58897d9998-hlnsw"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226247 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18690a90-a6e9-43ef-8550-e90caacb0d95-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226276 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-oauth-serving-cert\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226298 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226319 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8b608ed-2cef-43a4-8b6e-70efed829e65-trusted-ca\") pod \"console-operator-58897d9998-hlnsw\" (UID: \"b8b608ed-2cef-43a4-8b6e-70efed829e65\") " pod="openshift-console-operator/console-operator-58897d9998-hlnsw"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226340 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90285316-ecf0-4bd7-a9bf-72325863399c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xdjfd\" (UID: \"90285316-ecf0-4bd7-a9bf-72325863399c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226361 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ffe87ead-c1e1-4126-8c85-3054648d6990-etcd-client\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226382 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-client-ca\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226402 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8b608ed-2cef-43a4-8b6e-70efed829e65-serving-cert\") pod \"console-operator-58897d9998-hlnsw\" (UID: \"b8b608ed-2cef-43a4-8b6e-70efed829e65\") " pod="openshift-console-operator/console-operator-58897d9998-hlnsw"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226436 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48-machine-approver-tls\") pod \"machine-approver-56656f9798-6zl9m\" (UID: \"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226460 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/724b0be1-4d6c-4b96-933d-d94b8f146bd8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226481 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b2ef96f-044d-4b5e-97b8-9e413bc37088-client-ca\") pod \"route-controller-manager-6576b87f9c-rtxpx\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226505 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffe87ead-c1e1-4126-8c85-3054648d6990-serving-cert\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226531 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-config\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226552 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226573 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-trusted-ca-bundle\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226595 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-audit-dir\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226616 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226638 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-config\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226658 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxgth\" (UniqueName: \"kubernetes.io/projected/dd460169-7ac7-48de-95a0-4c8ec9fd2d31-kube-api-access-zxgth\") pod \"downloads-7954f5f757-f24wr\" (UID: \"dd460169-7ac7-48de-95a0-4c8ec9fd2d31\") " pod="openshift-console/downloads-7954f5f757-f24wr"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226684 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-config\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226808 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf96d2c-9865-437a-a87c-63ca051a421d-serving-cert\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226841 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-audit-policies\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226860 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226879 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2ef96f-044d-4b5e-97b8-9e413bc37088-config\") pod \"route-controller-manager-6576b87f9c-rtxpx\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226896 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-serving-cert\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226911 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24805259-8822-4373-aca0-68442e07b891-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tq99m\" (UID: \"24805259-8822-4373-aca0-68442e07b891\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226927 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67625800-270c-4a66-95d0-a2853c23c26f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5fck\" (UID: \"67625800-270c-4a66-95d0-a2853c23c26f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226942 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4qq6\" (UniqueName: \"kubernetes.io/projected/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-kube-api-access-q4qq6\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226963 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/724b0be1-4d6c-4b96-933d-d94b8f146bd8-etcd-client\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226977 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/724b0be1-4d6c-4b96-933d-d94b8f146bd8-encryption-config\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.226992 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccbjh\" (UniqueName: \"kubernetes.io/projected/ffe87ead-c1e1-4126-8c85-3054648d6990-kube-api-access-ccbjh\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.227007 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/724b0be1-4d6c-4b96-933d-d94b8f146bd8-audit-dir\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.227024 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl9bq\" (UniqueName: \"kubernetes.io/projected/724b0be1-4d6c-4b96-933d-d94b8f146bd8-kube-api-access-cl9bq\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.227869 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/724b0be1-4d6c-4b96-933d-d94b8f146bd8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.228258 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/724b0be1-4d6c-4b96-933d-d94b8f146bd8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.228927 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b2ef96f-044d-4b5e-97b8-9e413bc37088-client-ca\") pod \"route-controller-manager-6576b87f9c-rtxpx\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.229163 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5365032-f31f-4e90-bb94-193e5d6dcc9f-images\") pod \"machine-api-operator-5694c8668f-vbgcx\" (UID: \"e5365032-f31f-4e90-bb94-193e5d6dcc9f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.229465 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.229496 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qp52c"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.230503 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/724b0be1-4d6c-4b96-933d-d94b8f146bd8-audit-dir\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.230564 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdf8b\" (UniqueName: \"kubernetes.io/projected/2cf96d2c-9865-437a-a87c-63ca051a421d-kube-api-access-sdf8b\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.230594 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/724b0be1-4d6c-4b96-933d-d94b8f146bd8-audit-policies\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.230614 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfae7bfb-b250-48de-ad6b-e741405f07c3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fn8zx\" (UID: \"cfae7bfb-b250-48de-ad6b-e741405f07c3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.230654 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18690a90-a6e9-43ef-8550-e90caacb0d95-config\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.230673 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48-auth-proxy-config\") pod \"machine-approver-56656f9798-6zl9m\" (UID: \"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231290 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ffe87ead-c1e1-4126-8c85-3054648d6990-node-pullsecrets\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231351 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/724b0be1-4d6c-4b96-933d-d94b8f146bd8-audit-policies\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
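Because each phase carries a microsecond timestamp, mount latency can be read straight off adjacent records. For apiserver-7bbb656c7d-smwg8's configmap-backed etcd-serving-ca volume, "MountVolume started" at 19:06:07.226460 and "MountVolume.SetUp succeeded" at 19:06:07.227869 put the mount at roughly 1.4 ms, and the audit-policies configmap (started at .230594, succeeded at .231351) lands around 0.8 ms. The arithmetic, with both timestamps copied from the records above:

```go
// Mount latency for one volume, using two timestamps copied from the
// records above (etcd-serving-ca for apiserver-7bbb656c7d-smwg8).
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "15:04:05.000000"
	started, _ := time.Parse(layout, "19:06:07.226460")   // "MountVolume started"
	succeeded, _ := time.Parse(layout, "19:06:07.227869") // "MountVolume.SetUp succeeded"
	fmt.Println(succeeded.Sub(started))                   // prints 1.409ms
}
```

The secret-backed volumes in the same window take a few milliseconds longer (encryption-config starts at .226977 and succeeds at .233927 further down, about 7 ms); a single capture proves nothing about the cause, but the ordering is consistent throughout this boot.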
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231360 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231371 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhfpj\" (UniqueName: \"kubernetes.io/projected/18690a90-a6e9-43ef-8550-e90caacb0d95-kube-api-access-lhfpj\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231401 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6qnz\" (UniqueName: \"kubernetes.io/projected/c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48-kube-api-access-x6qnz\") pod \"machine-approver-56656f9798-6zl9m\" (UID: \"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231459 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67625800-270c-4a66-95d0-a2853c23c26f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5fck\" (UID: \"67625800-270c-4a66-95d0-a2853c23c26f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231498 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231528 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18690a90-a6e9-43ef-8550-e90caacb0d95-serving-cert\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231549 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmw6t\" (UniqueName: \"kubernetes.io/projected/b8b608ed-2cef-43a4-8b6e-70efed829e65-kube-api-access-lmw6t\") pod \"console-operator-58897d9998-hlnsw\" (UID: \"b8b608ed-2cef-43a4-8b6e-70efed829e65\") " pod="openshift-console-operator/console-operator-58897d9998-hlnsw"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231579 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlb4b\" (UniqueName: \"kubernetes.io/projected/90285316-ecf0-4bd7-a9bf-72325863399c-kube-api-access-tlb4b\") pod \"cluster-image-registry-operator-dc59b4c8b-xdjfd\" (UID: \"90285316-ecf0-4bd7-a9bf-72325863399c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231642 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6pwl\" (UniqueName: \"kubernetes.io/projected/e5365032-f31f-4e90-bb94-193e5d6dcc9f-kube-api-access-l6pwl\") pod \"machine-api-operator-5694c8668f-vbgcx\" (UID: \"e5365032-f31f-4e90-bb94-193e5d6dcc9f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231657 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2ef96f-044d-4b5e-97b8-9e413bc37088-config\") pod \"route-controller-manager-6576b87f9c-rtxpx\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231706 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231750 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-audit\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231786 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-oauth-config\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231847 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24805259-8822-4373-aca0-68442e07b891-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tq99m\" (UID: \"24805259-8822-4373-aca0-68442e07b891\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231980 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5365032-f31f-4e90-bb94-193e5d6dcc9f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vbgcx\" (UID: \"e5365032-f31f-4e90-bb94-193e5d6dcc9f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231988 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724b0be1-4d6c-4b96-933d-d94b8f146bd8-serving-cert\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.231998 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5365032-f31f-4e90-bb94-193e5d6dcc9f-config\") pod \"machine-api-operator-5694c8668f-vbgcx\" (UID: \"e5365032-f31f-4e90-bb94-193e5d6dcc9f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.232052 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ffe87ead-c1e1-4126-8c85-3054648d6990-audit-dir\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.232680 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5365032-f31f-4e90-bb94-193e5d6dcc9f-config\") pod \"machine-api-operator-5694c8668f-vbgcx\" (UID: \"e5365032-f31f-4e90-bb94-193e5d6dcc9f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.233927 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/724b0be1-4d6c-4b96-933d-d94b8f146bd8-encryption-config\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.236353 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/724b0be1-4d6c-4b96-933d-d94b8f146bd8-etcd-client\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.238601 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b2ef96f-044d-4b5e-97b8-9e413bc37088-serving-cert\") pod \"route-controller-manager-6576b87f9c-rtxpx\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.243629 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfae7bfb-b250-48de-ad6b-e741405f07c3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fn8zx\" (UID: \"cfae7bfb-b250-48de-ad6b-e741405f07c3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.244523 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l9zft"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.244565 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.244577 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.244588 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b6wdx"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.244603 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9xcvs"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.244617 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hlnsw"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.245993 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q8sfr"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.249382 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.249468 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.253478 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gvjlq"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.254585 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-dz45b"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.255019 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.255699 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gvjlq"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.256180 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dz45b"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.257498 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.261443 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wk88t"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.263105 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.264383 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hcgdz"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.265778 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dz45b"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.267415 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gvjlq"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.269474 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9"]
Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.271054 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jcw75"]
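The interleaved reflector.go:368 "Caches populated" lines mark kubelet-side watch caches finishing their initial list for the Secrets and ConfigMaps that the pods above reference; the volume mounts in this capture are served from those caches rather than from ad-hoc GETs. The kubelet wires this up internally per referenced object, but the client-side shape is the familiar client-go informer sync. A minimal sketch, assuming a reachable kubeconfig at the default path and borrowing the openshift-authentication namespace from the records above; this is analogous machinery, not the kubelet's actual reflector plumbing:

```go
// Client-side analogue of a "Caches populated for *v1.Secret ..." line:
// start an informer for Secrets in one namespace and wait for its initial
// LIST/WATCH to fill the local cache. Kubeconfig path and namespace are
// assumptions for illustration.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Scope the watch to one namespace, as the kubelet does per referenced object.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 30*time.Second, informers.WithNamespace("openshift-authentication"))
	secrets := factory.Core().V1().Secrets()
	informer := secrets.Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// This is the moment the kubelet logs as "Caches populated for *v1.Secret ...".
	if !cache.WaitForCacheSync(stop, informer.HasSynced) {
		panic("cache never synced")
	}
	cached, _ := secrets.Lister().Secrets("openshift-authentication").List(labels.Everything())
	fmt.Println("secrets in cache:", len(cached))
}
```

Until that sync completes, anything that depends on the cached object (here, secret- and configmap-backed volume mounts) has to wait, which is why these reflector lines are interleaved with the mount records rather than grouped at the start.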
Need to start a new one" pod="openshift-dns/dns-default-jcw75" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.273774 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jcw75"] Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.277907 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.297801 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.318395 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333142 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99fg2\" (UniqueName: \"kubernetes.io/projected/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-kube-api-access-99fg2\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333170 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/90285316-ecf0-4bd7-a9bf-72325863399c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xdjfd\" (UID: \"90285316-ecf0-4bd7-a9bf-72325863399c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333198 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqqz6\" (UniqueName: \"kubernetes.io/projected/67625800-270c-4a66-95d0-a2853c23c26f-kube-api-access-rqqz6\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5fck\" (UID: \"67625800-270c-4a66-95d0-a2853c23c26f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333215 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48-config\") pod \"machine-approver-56656f9798-6zl9m\" (UID: \"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333233 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18690a90-a6e9-43ef-8550-e90caacb0d95-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333251 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-image-import-ca\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333268 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333283 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b608ed-2cef-43a4-8b6e-70efed829e65-config\") pod \"console-operator-58897d9998-hlnsw\" (UID: \"b8b608ed-2cef-43a4-8b6e-70efed829e65\") " pod="openshift-console-operator/console-operator-58897d9998-hlnsw" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333301 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8b608ed-2cef-43a4-8b6e-70efed829e65-trusted-ca\") pod \"console-operator-58897d9998-hlnsw\" (UID: \"b8b608ed-2cef-43a4-8b6e-70efed829e65\") " pod="openshift-console-operator/console-operator-58897d9998-hlnsw" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333324 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-oauth-serving-cert\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333370 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333392 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90285316-ecf0-4bd7-a9bf-72325863399c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xdjfd\" (UID: \"90285316-ecf0-4bd7-a9bf-72325863399c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333412 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ffe87ead-c1e1-4126-8c85-3054648d6990-etcd-client\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333436 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-client-ca\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333459 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8b608ed-2cef-43a4-8b6e-70efed829e65-serving-cert\") pod \"console-operator-58897d9998-hlnsw\" (UID: \"b8b608ed-2cef-43a4-8b6e-70efed829e65\") " 
pod="openshift-console-operator/console-operator-58897d9998-hlnsw" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333490 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48-machine-approver-tls\") pod \"machine-approver-56656f9798-6zl9m\" (UID: \"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333519 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffe87ead-c1e1-4126-8c85-3054648d6990-serving-cert\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333543 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333566 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-config\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333588 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-trusted-ca-bundle\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333610 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-audit-dir\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333633 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333654 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxgth\" (UniqueName: \"kubernetes.io/projected/dd460169-7ac7-48de-95a0-4c8ec9fd2d31-kube-api-access-zxgth\") pod \"downloads-7954f5f757-f24wr\" (UID: \"dd460169-7ac7-48de-95a0-4c8ec9fd2d31\") " pod="openshift-console/downloads-7954f5f757-f24wr" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333675 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-config\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333695 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-config\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333715 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf96d2c-9865-437a-a87c-63ca051a421d-serving-cert\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333739 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-audit-policies\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333762 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333784 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24805259-8822-4373-aca0-68442e07b891-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tq99m\" (UID: \"24805259-8822-4373-aca0-68442e07b891\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333806 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-serving-cert\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333866 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67625800-270c-4a66-95d0-a2853c23c26f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5fck\" (UID: \"67625800-270c-4a66-95d0-a2853c23c26f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333892 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4qq6\" (UniqueName: \"kubernetes.io/projected/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-kube-api-access-q4qq6\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333915 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccbjh\" (UniqueName: \"kubernetes.io/projected/ffe87ead-c1e1-4126-8c85-3054648d6990-kube-api-access-ccbjh\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333945 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdf8b\" (UniqueName: \"kubernetes.io/projected/2cf96d2c-9865-437a-a87c-63ca051a421d-kube-api-access-sdf8b\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333965 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48-auth-proxy-config\") pod \"machine-approver-56656f9798-6zl9m\" (UID: \"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.333986 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18690a90-a6e9-43ef-8550-e90caacb0d95-config\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334009 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ffe87ead-c1e1-4126-8c85-3054648d6990-node-pullsecrets\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334043 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6qnz\" (UniqueName: \"kubernetes.io/projected/c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48-kube-api-access-x6qnz\") pod \"machine-approver-56656f9798-6zl9m\" (UID: \"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334068 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhfpj\" (UniqueName: \"kubernetes.io/projected/18690a90-a6e9-43ef-8550-e90caacb0d95-kube-api-access-lhfpj\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334099 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18690a90-a6e9-43ef-8550-e90caacb0d95-serving-cert\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334131 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67625800-270c-4a66-95d0-a2853c23c26f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5fck\" (UID: \"67625800-270c-4a66-95d0-a2853c23c26f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334157 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334180 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmw6t\" (UniqueName: \"kubernetes.io/projected/b8b608ed-2cef-43a4-8b6e-70efed829e65-kube-api-access-lmw6t\") pod \"console-operator-58897d9998-hlnsw\" (UID: \"b8b608ed-2cef-43a4-8b6e-70efed829e65\") " pod="openshift-console-operator/console-operator-58897d9998-hlnsw" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334204 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlb4b\" (UniqueName: \"kubernetes.io/projected/90285316-ecf0-4bd7-a9bf-72325863399c-kube-api-access-tlb4b\") pod \"cluster-image-registry-operator-dc59b4c8b-xdjfd\" (UID: \"90285316-ecf0-4bd7-a9bf-72325863399c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334244 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334278 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-audit\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334305 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-oauth-config\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334327 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24805259-8822-4373-aca0-68442e07b891-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tq99m\" (UID: \"24805259-8822-4373-aca0-68442e07b891\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334349 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/ffe87ead-c1e1-4126-8c85-3054648d6990-audit-dir\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334372 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ffe87ead-c1e1-4126-8c85-3054648d6990-encryption-config\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334401 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334423 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334458 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-service-ca\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334461 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-oauth-serving-cert\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334487 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334461 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8b608ed-2cef-43a4-8b6e-70efed829e65-trusted-ca\") pod \"console-operator-58897d9998-hlnsw\" (UID: \"b8b608ed-2cef-43a4-8b6e-70efed829e65\") " pod="openshift-console-operator/console-operator-58897d9998-hlnsw" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334516 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzzws\" (UniqueName: \"kubernetes.io/projected/24805259-8822-4373-aca0-68442e07b891-kube-api-access-lzzws\") pod \"openshift-apiserver-operator-796bbdcf4f-tq99m\" (UID: \"24805259-8822-4373-aca0-68442e07b891\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334520 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-client-ca\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334541 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8d4020-b5ce-4b96-ba2a-ce605a9a514b-serving-cert\") pod \"openshift-config-operator-7777fb866f-7gd82\" (UID: \"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334565 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3e8d4020-b5ce-4b96-ba2a-ce605a9a514b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7gd82\" (UID: \"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334591 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334614 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18690a90-a6e9-43ef-8550-e90caacb0d95-service-ca-bundle\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334636 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90285316-ecf0-4bd7-a9bf-72325863399c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xdjfd\" (UID: \"90285316-ecf0-4bd7-a9bf-72325863399c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334660 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334680 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90285316-ecf0-4bd7-a9bf-72325863399c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xdjfd\" (UID: \"90285316-ecf0-4bd7-a9bf-72325863399c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" Dec 05 19:06:07 crc kubenswrapper[4828]: 
I1205 19:06:07.334682 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334731 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18690a90-a6e9-43ef-8550-e90caacb0d95-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334742 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdmks\" (UniqueName: \"kubernetes.io/projected/3e8d4020-b5ce-4b96-ba2a-ce605a9a514b-kube-api-access-jdmks\") pod \"openshift-config-operator-7777fb866f-7gd82\" (UID: \"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.334909 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-etcd-serving-ca\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.335238 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b608ed-2cef-43a4-8b6e-70efed829e65-config\") pod \"console-operator-58897d9998-hlnsw\" (UID: \"b8b608ed-2cef-43a4-8b6e-70efed829e65\") " pod="openshift-console-operator/console-operator-58897d9998-hlnsw" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.335783 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.336237 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48-config\") pod \"machine-approver-56656f9798-6zl9m\" (UID: \"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.336295 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-trusted-ca-bundle\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.336355 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/90285316-ecf0-4bd7-a9bf-72325863399c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xdjfd\" (UID: \"90285316-ecf0-4bd7-a9bf-72325863399c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.336971 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.337071 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-audit-dir\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.337351 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.337399 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3e8d4020-b5ce-4b96-ba2a-ce605a9a514b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7gd82\" (UID: \"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.337461 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-audit-policies\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.337495 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-service-ca\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.337970 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-image-import-ca\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.338060 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ffe87ead-c1e1-4126-8c85-3054648d6990-node-pullsecrets\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc 
kubenswrapper[4828]: I1205 19:06:07.338289 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48-auth-proxy-config\") pod \"machine-approver-56656f9798-6zl9m\" (UID: \"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.338531 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18690a90-a6e9-43ef-8550-e90caacb0d95-service-ca-bundle\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.338963 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.339009 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18690a90-a6e9-43ef-8550-e90caacb0d95-config\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.339032 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-audit\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.339213 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-config\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.339283 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67625800-270c-4a66-95d0-a2853c23c26f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5fck\" (UID: \"67625800-270c-4a66-95d0-a2853c23c26f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.339356 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.339380 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.339532 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.339531 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-config\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.339643 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.339767 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ffe87ead-c1e1-4126-8c85-3054648d6990-encryption-config\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.340189 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.340242 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ffe87ead-c1e1-4126-8c85-3054648d6990-etcd-serving-ca\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.340307 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ffe87ead-c1e1-4126-8c85-3054648d6990-audit-dir\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.338962 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-config\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.340712 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf96d2c-9865-437a-a87c-63ca051a421d-serving-cert\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 
19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.340791 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24805259-8822-4373-aca0-68442e07b891-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tq99m\" (UID: \"24805259-8822-4373-aca0-68442e07b891\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.341631 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18690a90-a6e9-43ef-8550-e90caacb0d95-serving-cert\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.342264 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ffe87ead-c1e1-4126-8c85-3054648d6990-etcd-client\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.342446 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48-machine-approver-tls\") pod \"machine-approver-56656f9798-6zl9m\" (UID: \"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.342578 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8d4020-b5ce-4b96-ba2a-ce605a9a514b-serving-cert\") pod \"openshift-config-operator-7777fb866f-7gd82\" (UID: \"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.342581 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffe87ead-c1e1-4126-8c85-3054648d6990-serving-cert\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.342620 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.342790 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8b608ed-2cef-43a4-8b6e-70efed829e65-serving-cert\") pod \"console-operator-58897d9998-hlnsw\" (UID: \"b8b608ed-2cef-43a4-8b6e-70efed829e65\") " pod="openshift-console-operator/console-operator-58897d9998-hlnsw" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.343144 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.343474 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24805259-8822-4373-aca0-68442e07b891-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tq99m\" (UID: \"24805259-8822-4373-aca0-68442e07b891\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.343758 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67625800-270c-4a66-95d0-a2853c23c26f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5fck\" (UID: \"67625800-270c-4a66-95d0-a2853c23c26f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.343906 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.344113 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.344549 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.344756 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-serving-cert\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.358410 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-oauth-config\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.359258 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.378651 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.398417 4828 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.417682 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.438282 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.458245 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.480795 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.499225 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.517680 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.537628 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.557549 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.578533 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.598178 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.619367 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.638526 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.658517 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.679344 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.698443 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.718423 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.738366 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.758087 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.777663 4828 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.799047 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.818971 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.837672 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.857954 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.878647 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.898094 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.918064 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.938564 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.959430 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.979210 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 05 19:06:07 crc kubenswrapper[4828]: I1205 19:06:07.998398 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.018225 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.038745 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.058676 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.078192 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.098182 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.118007 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.138904 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.158373 4828 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.178471 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.196586 4828 request.go:700] Waited for 1.008088781s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.199071 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.219569 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.239383 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.258475 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.280008 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.299053 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.319682 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.338037 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.347818 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.352121 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.359011 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.363568 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.378757 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.398520 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.418229 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.438086 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.449006 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.449190 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.449407 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.449470 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.450358 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:06:08 crc kubenswrapper[4828]: E1205 19:06:08.450453 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:08:10.450434092 +0000 UTC m=+268.345656408 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.455701 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.458052 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.463320 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.478680 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.498801 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.518614 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.539797 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 05 19:06:08 crc kubenswrapper[4828]: W1205 19:06:08.550946 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-e76d39ed0f9ea6f70c71a7b694a6dd8a045e086f27171e609066dca7229a31b0 WatchSource:0}: Error finding container e76d39ed0f9ea6f70c71a7b694a6dd8a045e086f27171e609066dca7229a31b0: Status 404 returned error can't find the container with id e76d39ed0f9ea6f70c71a7b694a6dd8a045e086f27171e609066dca7229a31b0 Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.558040 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.577662 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.603859 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.618955 4828 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.644653 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.658226 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.677182 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.678783 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.698108 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.698114 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.718643 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.737986 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.758718 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.778495 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.798631 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.817916 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.837693 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 05 19:06:08 crc kubenswrapper[4828]: W1205 19:06:08.870672 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-966214ceb51a55d54092fdc08ad55892140467406ffc4570dea3d2035182ecd5 WatchSource:0}: Error finding container 966214ceb51a55d54092fdc08ad55892140467406ffc4570dea3d2035182ecd5: Status 404 returned error can't find the container with id 966214ceb51a55d54092fdc08ad55892140467406ffc4570dea3d2035182ecd5 Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.893697 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvd4h\" (UniqueName: \"kubernetes.io/projected/0b2ef96f-044d-4b5e-97b8-9e413bc37088-kube-api-access-lvd4h\") pod \"route-controller-manager-6576b87f9c-rtxpx\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.914673 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd88g\" (UniqueName: \"kubernetes.io/projected/cfae7bfb-b250-48de-ad6b-e741405f07c3-kube-api-access-wd88g\") pod \"cluster-samples-operator-665b6dd947-fn8zx\" (UID: \"cfae7bfb-b250-48de-ad6b-e741405f07c3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.923686 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.934904 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl9bq\" (UniqueName: \"kubernetes.io/projected/724b0be1-4d6c-4b96-933d-d94b8f146bd8-kube-api-access-cl9bq\") pod \"apiserver-7bbb656c7d-smwg8\" (UID: \"724b0be1-4d6c-4b96-933d-d94b8f146bd8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.958681 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6pwl\" (UniqueName: \"kubernetes.io/projected/e5365032-f31f-4e90-bb94-193e5d6dcc9f-kube-api-access-l6pwl\") pod \"machine-api-operator-5694c8668f-vbgcx\" (UID: \"e5365032-f31f-4e90-bb94-193e5d6dcc9f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.978855 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 05 19:06:08 crc kubenswrapper[4828]: I1205 19:06:08.998450 4828 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.019909 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.037574 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.058702 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.078789 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.087553 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx"] Dec 05 19:06:09 crc kubenswrapper[4828]: W1205 19:06:09.093783 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b2ef96f_044d_4b5e_97b8_9e413bc37088.slice/crio-ac2b10578ae2cb1f9c339260bc1ac97b89be31b8746aec0c56d7155d0f83015a WatchSource:0}: Error finding container ac2b10578ae2cb1f9c339260bc1ac97b89be31b8746aec0c56d7155d0f83015a: Status 404 returned error can't find the container with id ac2b10578ae2cb1f9c339260bc1ac97b89be31b8746aec0c56d7155d0f83015a Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.098202 4828 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.118531 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.139143 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.149146 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.159514 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.190294 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.196599 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99fg2\" (UniqueName: \"kubernetes.io/projected/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-kube-api-access-99fg2\") pod \"console-f9d7485db-q9sfv\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.196741 4828 request.go:700] Waited for 1.863206303s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.208339 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.214710 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqqz6\" (UniqueName: \"kubernetes.io/projected/67625800-270c-4a66-95d0-a2853c23c26f-kube-api-access-rqqz6\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5fck\" (UID: \"67625800-270c-4a66-95d0-a2853c23c26f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.247419 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdmks\" (UniqueName: \"kubernetes.io/projected/3e8d4020-b5ce-4b96-ba2a-ce605a9a514b-kube-api-access-jdmks\") pod \"openshift-config-operator-7777fb866f-7gd82\" (UID: \"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.267217 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6qnz\" (UniqueName: \"kubernetes.io/projected/c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48-kube-api-access-x6qnz\") pod \"machine-approver-56656f9798-6zl9m\" (UID: \"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.278716 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4qq6\" (UniqueName: \"kubernetes.io/projected/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-kube-api-access-q4qq6\") pod \"oauth-openshift-558db77b4-b6nk4\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.297882 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhfpj\" (UniqueName: \"kubernetes.io/projected/18690a90-a6e9-43ef-8550-e90caacb0d95-kube-api-access-lhfpj\") pod \"authentication-operator-69f744f599-b6wdx\" (UID: \"18690a90-a6e9-43ef-8550-e90caacb0d95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.316108 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.316764 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccbjh\" (UniqueName: \"kubernetes.io/projected/ffe87ead-c1e1-4126-8c85-3054648d6990-kube-api-access-ccbjh\") pod \"apiserver-76f77b778f-rkdvk\" (UID: \"ffe87ead-c1e1-4126-8c85-3054648d6990\") " pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.324313 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.330563 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.333163 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdf8b\" (UniqueName: \"kubernetes.io/projected/2cf96d2c-9865-437a-a87c-63ca051a421d-kube-api-access-sdf8b\") pod \"controller-manager-879f6c89f-m957x\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:09 crc kubenswrapper[4828]: W1205 19:06:09.339953 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5e09f6d_a6ad_4ad7_afe3_b18d06c1fa48.slice/crio-b4e1f16c8021e67ebb9a7188c2ab91287fcf2e1c7c16c4a0e9d067d3e6fcf840 WatchSource:0}: Error finding container b4e1f16c8021e67ebb9a7188c2ab91287fcf2e1c7c16c4a0e9d067d3e6fcf840: Status 404 returned error can't find the container with id b4e1f16c8021e67ebb9a7188c2ab91287fcf2e1c7c16c4a0e9d067d3e6fcf840 Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.373095 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzzws\" (UniqueName: \"kubernetes.io/projected/24805259-8822-4373-aca0-68442e07b891-kube-api-access-lzzws\") pod \"openshift-apiserver-operator-796bbdcf4f-tq99m\" (UID: \"24805259-8822-4373-aca0-68442e07b891\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.373096 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.373216 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.374846 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vbgcx"] Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.377649 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxgth\" (UniqueName: \"kubernetes.io/projected/dd460169-7ac7-48de-95a0-4c8ec9fd2d31-kube-api-access-zxgth\") pod \"downloads-7954f5f757-f24wr\" (UID: \"dd460169-7ac7-48de-95a0-4c8ec9fd2d31\") " pod="openshift-console/downloads-7954f5f757-f24wr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.378238 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.387089 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.393044 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90285316-ecf0-4bd7-a9bf-72325863399c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xdjfd\" (UID: \"90285316-ecf0-4bd7-a9bf-72325863399c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.401785 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d54c5add25fb0a6ca16235f8828cfb2702ad733cbeef25c8c2bf0d455848a8c3"} Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.401856 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"966214ceb51a55d54092fdc08ad55892140467406ffc4570dea3d2035182ecd5"} Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.402078 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.406377 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" event={"ID":"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48","Type":"ContainerStarted","Data":"b4e1f16c8021e67ebb9a7188c2ab91287fcf2e1c7c16c4a0e9d067d3e6fcf840"} Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.407931 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" event={"ID":"0b2ef96f-044d-4b5e-97b8-9e413bc37088","Type":"ContainerStarted","Data":"ee36bab915719571c53f14a6366c41121b8f4e0479b67d4faaf27783fa79b1a7"} Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.407980 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" event={"ID":"0b2ef96f-044d-4b5e-97b8-9e413bc37088","Type":"ContainerStarted","Data":"ac2b10578ae2cb1f9c339260bc1ac97b89be31b8746aec0c56d7155d0f83015a"} Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.408931 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.412456 4828 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-rtxpx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.412505 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" podUID="0b2ef96f-044d-4b5e-97b8-9e413bc37088" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.413095 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6d8b0f03a916726eb4332e0fbf0eb3fb8b1f006c3b4db53d1d7804b6c6d8af67"} Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.413128 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e76d39ed0f9ea6f70c71a7b694a6dd8a045e086f27171e609066dca7229a31b0"} Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.413124 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmw6t\" (UniqueName: \"kubernetes.io/projected/b8b608ed-2cef-43a4-8b6e-70efed829e65-kube-api-access-lmw6t\") pod \"console-operator-58897d9998-hlnsw\" (UID: \"b8b608ed-2cef-43a4-8b6e-70efed829e65\") " pod="openshift-console-operator/console-operator-58897d9998-hlnsw" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.416180 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"61f2153290ebfa10c16010d163a1a238dc6d13cc0f9c90949c01990ec880b633"} Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.416205 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"80d13a2c4a2e83786b4af3ab4fdcb9afbef984efce5ae116d6446ee9a096d242"} Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.433620 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx"] Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.439017 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlb4b\" (UniqueName: \"kubernetes.io/projected/90285316-ecf0-4bd7-a9bf-72325863399c-kube-api-access-tlb4b\") pod \"cluster-image-registry-operator-dc59b4c8b-xdjfd\" (UID: \"90285316-ecf0-4bd7-a9bf-72325863399c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.473630 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7866b3d9-f32e-4b75-bade-36891f08ae41-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vlwt4\" (UID: \"7866b3d9-f32e-4b75-bade-36891f08ae41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.473802 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5ed41b4-64e6-407a-b3a5-104f2b97b008-secret-volume\") pod \"collect-profiles-29416020-s5gks\" (UID: \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.473861 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5ed41b4-64e6-407a-b3a5-104f2b97b008-config-volume\") pod 
\"collect-profiles-29416020-s5gks\" (UID: \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.473887 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ad1915a-9298-4aba-928b-5d3c7d57a7bb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kn6kp\" (UID: \"6ad1915a-9298-4aba-928b-5d3c7d57a7bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.473909 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8c8aaf1e-1192-4552-9de6-614d8e325b7b-signing-key\") pod \"service-ca-9c57cc56f-qp52c\" (UID: \"8c8aaf1e-1192-4552-9de6-614d8e325b7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.473935 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8c8aaf1e-1192-4552-9de6-614d8e325b7b-signing-cabundle\") pod \"service-ca-9c57cc56f-qp52c\" (UID: \"8c8aaf1e-1192-4552-9de6-614d8e325b7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.473956 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-etcd-client\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474028 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e55c6a6-4c4a-4548-ba89-a3a34c49124d-images\") pod \"machine-config-operator-74547568cd-bbvkt\" (UID: \"0e55c6a6-4c4a-4548-ba89-a3a34c49124d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474052 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnncb\" (UniqueName: \"kubernetes.io/projected/8c8aaf1e-1192-4552-9de6-614d8e325b7b-kube-api-access-bnncb\") pod \"service-ca-9c57cc56f-qp52c\" (UID: \"8c8aaf1e-1192-4552-9de6-614d8e325b7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474102 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12be153a-7f4d-4521-b4d9-def127e51cd5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zqxhr\" (UID: \"12be153a-7f4d-4521-b4d9-def127e51cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474166 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e55c6a6-4c4a-4548-ba89-a3a34c49124d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bbvkt\" (UID: 
\"0e55c6a6-4c4a-4548-ba89-a3a34c49124d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474266 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed1e0a7a-7a77-4343-8c33-e921e149ddab-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hcgdz\" (UID: \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\") " pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474321 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbnvr\" (UniqueName: \"kubernetes.io/projected/2250ca28-5f36-4c7f-aca3-b71131272a51-kube-api-access-vbnvr\") pod \"kube-storage-version-migrator-operator-b67b599dd-mz8xv\" (UID: \"2250ca28-5f36-4c7f-aca3-b71131272a51\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474345 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54ac7001-a300-4f21-b1dd-486db1c1e641-metrics-certs\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474367 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxdl\" (UniqueName: \"kubernetes.io/projected/b5ed41b4-64e6-407a-b3a5-104f2b97b008-kube-api-access-tvxdl\") pod \"collect-profiles-29416020-s5gks\" (UID: \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474388 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dd698da2-83bb-4051-b007-bbf97441a6b1-proxy-tls\") pod \"machine-config-controller-84d6567774-tphbb\" (UID: \"dd698da2-83bb-4051-b007-bbf97441a6b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474428 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-bound-sa-token\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474450 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/54ac7001-a300-4f21-b1dd-486db1c1e641-stats-auth\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474483 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqmj4\" (UniqueName: \"kubernetes.io/projected/d5d3db1a-56ec-426e-b14b-8be9a13c6347-kube-api-access-dqmj4\") pod 
\"catalog-operator-68c6474976-76wk9\" (UID: \"d5d3db1a-56ec-426e-b14b-8be9a13c6347\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474560 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12be153a-7f4d-4521-b4d9-def127e51cd5-trusted-ca\") pod \"ingress-operator-5b745b69d9-zqxhr\" (UID: \"12be153a-7f4d-4521-b4d9-def127e51cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474618 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8462a37-7879-45ab-91e6-29ae835c9771-webhook-cert\") pod \"packageserver-d55dfcdfc-l26k6\" (UID: \"b8462a37-7879-45ab-91e6-29ae835c9771\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474680 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e55c6a6-4c4a-4548-ba89-a3a34c49124d-proxy-tls\") pod \"machine-config-operator-74547568cd-bbvkt\" (UID: \"0e55c6a6-4c4a-4548-ba89-a3a34c49124d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474731 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2250ca28-5f36-4c7f-aca3-b71131272a51-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mz8xv\" (UID: \"2250ca28-5f36-4c7f-aca3-b71131272a51\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474756 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-config\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474805 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14ac26da-2122-48f7-8e83-1acb41418490-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k42q6\" (UID: \"14ac26da-2122-48f7-8e83-1acb41418490\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474844 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bnrq\" (UniqueName: \"kubernetes.io/projected/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-kube-api-access-5bnrq\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474868 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c48b7dac-cabb-40fb-bb18-a587cd1a3184-srv-cert\") pod \"olm-operator-6b444d44fb-twqzk\" 
(UID: \"c48b7dac-cabb-40fb-bb18-a587cd1a3184\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474906 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjhr\" (UniqueName: \"kubernetes.io/projected/54ac7001-a300-4f21-b1dd-486db1c1e641-kube-api-access-4fjhr\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474948 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b8462a37-7879-45ab-91e6-29ae835c9771-tmpfs\") pod \"packageserver-d55dfcdfc-l26k6\" (UID: \"b8462a37-7879-45ab-91e6-29ae835c9771\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.474972 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7866b3d9-f32e-4b75-bade-36891f08ae41-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vlwt4\" (UID: \"7866b3d9-f32e-4b75-bade-36891f08ae41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475034 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54ac7001-a300-4f21-b1dd-486db1c1e641-service-ca-bundle\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475058 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqwdw\" (UniqueName: \"kubernetes.io/projected/6c46f04e-0d87-4198-8eca-6000b06409c0-kube-api-access-gqwdw\") pod \"migrator-59844c95c7-rpjqf\" (UID: \"6c46f04e-0d87-4198-8eca-6000b06409c0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rpjqf" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475105 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ac26da-2122-48f7-8e83-1acb41418490-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k42q6\" (UID: \"14ac26da-2122-48f7-8e83-1acb41418490\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475152 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qcld\" (UniqueName: \"kubernetes.io/projected/6af58b85-a73a-4b78-b663-5e996f555e93-kube-api-access-2qcld\") pod \"package-server-manager-789f6589d5-gcfs9\" (UID: \"6af58b85-a73a-4b78-b663-5e996f555e93\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475198 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/17e0a9b8-d746-4a17-a424-122b5c30ce75-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475221 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-etcd-ca\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475287 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zmv\" (UniqueName: \"kubernetes.io/projected/46c916dc-b6ce-4e56-9ef3-e15d778d7173-kube-api-access-48zmv\") pod \"dns-operator-744455d44c-9xcvs\" (UID: \"46c916dc-b6ce-4e56-9ef3-e15d778d7173\") " pod="openshift-dns-operator/dns-operator-744455d44c-9xcvs" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475313 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c48b7dac-cabb-40fb-bb18-a587cd1a3184-profile-collector-cert\") pod \"olm-operator-6b444d44fb-twqzk\" (UID: \"c48b7dac-cabb-40fb-bb18-a587cd1a3184\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475336 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f228cca-e005-4036-916e-10c7d1f9da1e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sjlcb\" (UID: \"1f228cca-e005-4036-916e-10c7d1f9da1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475373 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/54ac7001-a300-4f21-b1dd-486db1c1e641-default-certificate\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475394 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnzn6\" (UniqueName: \"kubernetes.io/projected/12be153a-7f4d-4521-b4d9-def127e51cd5-kube-api-access-wnzn6\") pod \"ingress-operator-5b745b69d9-zqxhr\" (UID: \"12be153a-7f4d-4521-b4d9-def127e51cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475418 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dcsp\" (UniqueName: \"kubernetes.io/projected/bbb59932-429f-403e-8897-a4c7a778d7e2-kube-api-access-8dcsp\") pod \"machine-config-server-xk5tk\" (UID: \"bbb59932-429f-403e-8897-a4c7a778d7e2\") " pod="openshift-machine-config-operator/machine-config-server-xk5tk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475452 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475475 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/17e0a9b8-d746-4a17-a424-122b5c30ce75-registry-certificates\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475497 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1061190-bb41-45c6-99f8-977e2dd1df5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9knxn\" (UID: \"b1061190-bb41-45c6-99f8-977e2dd1df5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9knxn" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475518 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/46c916dc-b6ce-4e56-9ef3-e15d778d7173-metrics-tls\") pod \"dns-operator-744455d44c-9xcvs\" (UID: \"46c916dc-b6ce-4e56-9ef3-e15d778d7173\") " pod="openshift-dns-operator/dns-operator-744455d44c-9xcvs" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475541 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8glq\" (UniqueName: \"kubernetes.io/projected/0e55c6a6-4c4a-4548-ba89-a3a34c49124d-kube-api-access-r8glq\") pod \"machine-config-operator-74547568cd-bbvkt\" (UID: \"0e55c6a6-4c4a-4548-ba89-a3a34c49124d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475563 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp2dx\" (UniqueName: \"kubernetes.io/projected/b8462a37-7879-45ab-91e6-29ae835c9771-kube-api-access-gp2dx\") pod \"packageserver-d55dfcdfc-l26k6\" (UID: \"b8462a37-7879-45ab-91e6-29ae835c9771\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475583 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f228cca-e005-4036-916e-10c7d1f9da1e-config\") pod \"kube-controller-manager-operator-78b949d7b-sjlcb\" (UID: \"1f228cca-e005-4036-916e-10c7d1f9da1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475607 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-serving-cert\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475628 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6af58b85-a73a-4b78-b663-5e996f555e93-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gcfs9\" (UID: \"6af58b85-a73a-4b78-b663-5e996f555e93\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475675 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-registry-tls\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475698 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dd698da2-83bb-4051-b007-bbf97441a6b1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tphbb\" (UID: \"dd698da2-83bb-4051-b007-bbf97441a6b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475721 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e0a7a-7a77-4343-8c33-e921e149ddab-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hcgdz\" (UID: \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\") " pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475741 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/12be153a-7f4d-4521-b4d9-def127e51cd5-metrics-tls\") pod \"ingress-operator-5b745b69d9-zqxhr\" (UID: \"12be153a-7f4d-4521-b4d9-def127e51cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475777 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/17e0a9b8-d746-4a17-a424-122b5c30ce75-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475799 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzv5c\" (UniqueName: \"kubernetes.io/projected/c48b7dac-cabb-40fb-bb18-a587cd1a3184-kube-api-access-gzv5c\") pod \"olm-operator-6b444d44fb-twqzk\" (UID: \"c48b7dac-cabb-40fb-bb18-a587cd1a3184\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475864 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5g4z\" (UniqueName: \"kubernetes.io/projected/b1061190-bb41-45c6-99f8-977e2dd1df5b-kube-api-access-k5g4z\") pod \"multus-admission-controller-857f4d67dd-9knxn\" (UID: \"b1061190-bb41-45c6-99f8-977e2dd1df5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9knxn" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475929 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ac26da-2122-48f7-8e83-1acb41418490-config\") pod \"kube-apiserver-operator-766d6c64bb-k42q6\" (UID: \"14ac26da-2122-48f7-8e83-1acb41418490\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475952 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2250ca28-5f36-4c7f-aca3-b71131272a51-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mz8xv\" (UID: \"2250ca28-5f36-4c7f-aca3-b71131272a51\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475973 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15df38d4-e9b0-433c-8f33-5b5a9a14ca0f-config\") pod \"service-ca-operator-777779d784-l9zft\" (UID: \"15df38d4-e9b0-433c-8f33-5b5a9a14ca0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.475996 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bbb59932-429f-403e-8897-a4c7a778d7e2-node-bootstrap-token\") pod \"machine-config-server-xk5tk\" (UID: \"bbb59932-429f-403e-8897-a4c7a778d7e2\") " pod="openshift-machine-config-operator/machine-config-server-xk5tk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476062 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d5d3db1a-56ec-426e-b14b-8be9a13c6347-profile-collector-cert\") pod \"catalog-operator-68c6474976-76wk9\" (UID: \"d5d3db1a-56ec-426e-b14b-8be9a13c6347\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476177 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9zv7\" (UniqueName: \"kubernetes.io/projected/15df38d4-e9b0-433c-8f33-5b5a9a14ca0f-kube-api-access-n9zv7\") pod \"service-ca-operator-777779d784-l9zft\" (UID: \"15df38d4-e9b0-433c-8f33-5b5a9a14ca0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476208 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5d3db1a-56ec-426e-b14b-8be9a13c6347-srv-cert\") pod \"catalog-operator-68c6474976-76wk9\" (UID: \"d5d3db1a-56ec-426e-b14b-8be9a13c6347\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476278 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15df38d4-e9b0-433c-8f33-5b5a9a14ca0f-serving-cert\") pod \"service-ca-operator-777779d784-l9zft\" (UID: \"15df38d4-e9b0-433c-8f33-5b5a9a14ca0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476303 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgb6g\" (UniqueName: \"kubernetes.io/projected/dd698da2-83bb-4051-b007-bbf97441a6b1-kube-api-access-vgb6g\") pod \"machine-config-controller-84d6567774-tphbb\" (UID: \"dd698da2-83bb-4051-b007-bbf97441a6b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476374 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7866b3d9-f32e-4b75-bade-36891f08ae41-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vlwt4\" (UID: \"7866b3d9-f32e-4b75-bade-36891f08ae41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476497 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f228cca-e005-4036-916e-10c7d1f9da1e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sjlcb\" (UID: \"1f228cca-e005-4036-916e-10c7d1f9da1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476553 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-etcd-service-ca\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476670 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz6lz\" (UniqueName: \"kubernetes.io/projected/6ad1915a-9298-4aba-928b-5d3c7d57a7bb-kube-api-access-zz6lz\") pod \"control-plane-machine-set-operator-78cbb6b69f-kn6kp\" (UID: \"6ad1915a-9298-4aba-928b-5d3c7d57a7bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476716 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17e0a9b8-d746-4a17-a424-122b5c30ce75-trusted-ca\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476771 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthq8\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-kube-api-access-sthq8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476794 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8462a37-7879-45ab-91e6-29ae835c9771-apiservice-cert\") pod \"packageserver-d55dfcdfc-l26k6\" (UID: \"b8462a37-7879-45ab-91e6-29ae835c9771\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 
19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.476917 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq7jp\" (UniqueName: \"kubernetes.io/projected/ed1e0a7a-7a77-4343-8c33-e921e149ddab-kube-api-access-vq7jp\") pod \"marketplace-operator-79b997595-hcgdz\" (UID: \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\") " pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz"
Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.477023 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bbb59932-429f-403e-8897-a4c7a778d7e2-certs\") pod \"machine-config-server-xk5tk\" (UID: \"bbb59932-429f-403e-8897-a4c7a778d7e2\") " pod="openshift-machine-config-operator/machine-config-server-xk5tk"
Dec 05 19:06:09 crc kubenswrapper[4828]: E1205 19:06:09.485675 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:09.985653443 +0000 UTC m=+147.880875759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.543456 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-f24wr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.567607 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7gd82"] Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577480 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577665 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f228cca-e005-4036-916e-10c7d1f9da1e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sjlcb\" (UID: \"1f228cca-e005-4036-916e-10c7d1f9da1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577697 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-etcd-service-ca\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577722 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz6lz\" (UniqueName: \"kubernetes.io/projected/6ad1915a-9298-4aba-928b-5d3c7d57a7bb-kube-api-access-zz6lz\") pod \"control-plane-machine-set-operator-78cbb6b69f-kn6kp\" (UID: \"6ad1915a-9298-4aba-928b-5d3c7d57a7bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577746 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17e0a9b8-d746-4a17-a424-122b5c30ce75-trusted-ca\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577765 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sthq8\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-kube-api-access-sthq8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577786 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8462a37-7879-45ab-91e6-29ae835c9771-apiservice-cert\") pod \"packageserver-d55dfcdfc-l26k6\" (UID: \"b8462a37-7879-45ab-91e6-29ae835c9771\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577809 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq7jp\" (UniqueName: \"kubernetes.io/projected/ed1e0a7a-7a77-4343-8c33-e921e149ddab-kube-api-access-vq7jp\") pod \"marketplace-operator-79b997595-hcgdz\" (UID: 
\"ed1e0a7a-7a77-4343-8c33-e921e149ddab\") " pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577849 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bbb59932-429f-403e-8897-a4c7a778d7e2-certs\") pod \"machine-config-server-xk5tk\" (UID: \"bbb59932-429f-403e-8897-a4c7a778d7e2\") " pod="openshift-machine-config-operator/machine-config-server-xk5tk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577875 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhh9m\" (UniqueName: \"kubernetes.io/projected/55005e7b-f061-4065-8e7c-dd418b7fd072-kube-api-access-fhh9m\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577899 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7866b3d9-f32e-4b75-bade-36891f08ae41-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vlwt4\" (UID: \"7866b3d9-f32e-4b75-bade-36891f08ae41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577921 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5ed41b4-64e6-407a-b3a5-104f2b97b008-secret-volume\") pod \"collect-profiles-29416020-s5gks\" (UID: \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577940 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5ed41b4-64e6-407a-b3a5-104f2b97b008-config-volume\") pod \"collect-profiles-29416020-s5gks\" (UID: \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577961 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ad1915a-9298-4aba-928b-5d3c7d57a7bb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kn6kp\" (UID: \"6ad1915a-9298-4aba-928b-5d3c7d57a7bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.577983 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-plugins-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578004 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8c8aaf1e-1192-4552-9de6-614d8e325b7b-signing-key\") pod \"service-ca-9c57cc56f-qp52c\" (UID: \"8c8aaf1e-1192-4552-9de6-614d8e325b7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 
19:06:09.578027 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wz5f\" (UniqueName: \"kubernetes.io/projected/2a401879-c671-46e3-a1ba-d6dbffb5ca5d-kube-api-access-5wz5f\") pod \"dns-default-jcw75\" (UID: \"2a401879-c671-46e3-a1ba-d6dbffb5ca5d\") " pod="openshift-dns/dns-default-jcw75" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578048 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8c8aaf1e-1192-4552-9de6-614d8e325b7b-signing-cabundle\") pod \"service-ca-9c57cc56f-qp52c\" (UID: \"8c8aaf1e-1192-4552-9de6-614d8e325b7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578069 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-etcd-client\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578090 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-socket-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578116 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e55c6a6-4c4a-4548-ba89-a3a34c49124d-images\") pod \"machine-config-operator-74547568cd-bbvkt\" (UID: \"0e55c6a6-4c4a-4548-ba89-a3a34c49124d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578136 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnncb\" (UniqueName: \"kubernetes.io/projected/8c8aaf1e-1192-4552-9de6-614d8e325b7b-kube-api-access-bnncb\") pod \"service-ca-9c57cc56f-qp52c\" (UID: \"8c8aaf1e-1192-4552-9de6-614d8e325b7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578163 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12be153a-7f4d-4521-b4d9-def127e51cd5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zqxhr\" (UID: \"12be153a-7f4d-4521-b4d9-def127e51cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578187 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f9ht\" (UniqueName: \"kubernetes.io/projected/70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5-kube-api-access-5f9ht\") pod \"ingress-canary-dz45b\" (UID: \"70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5\") " pod="openshift-ingress-canary/ingress-canary-dz45b" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578213 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e55c6a6-4c4a-4548-ba89-a3a34c49124d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bbvkt\" (UID: 
\"0e55c6a6-4c4a-4548-ba89-a3a34c49124d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:09 crc kubenswrapper[4828]: E1205 19:06:09.578287 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:10.078231034 +0000 UTC m=+147.973453350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578316 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed1e0a7a-7a77-4343-8c33-e921e149ddab-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hcgdz\" (UID: \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\") " pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578385 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbnvr\" (UniqueName: \"kubernetes.io/projected/2250ca28-5f36-4c7f-aca3-b71131272a51-kube-api-access-vbnvr\") pod \"kube-storage-version-migrator-operator-b67b599dd-mz8xv\" (UID: \"2250ca28-5f36-4c7f-aca3-b71131272a51\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578447 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54ac7001-a300-4f21-b1dd-486db1c1e641-metrics-certs\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578542 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxdl\" (UniqueName: \"kubernetes.io/projected/b5ed41b4-64e6-407a-b3a5-104f2b97b008-kube-api-access-tvxdl\") pod \"collect-profiles-29416020-s5gks\" (UID: \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578569 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-bound-sa-token\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578627 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/54ac7001-a300-4f21-b1dd-486db1c1e641-stats-auth\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 
19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.578650 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dd698da2-83bb-4051-b007-bbf97441a6b1-proxy-tls\") pod \"machine-config-controller-84d6567774-tphbb\" (UID: \"dd698da2-83bb-4051-b007-bbf97441a6b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.579914 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8c8aaf1e-1192-4552-9de6-614d8e325b7b-signing-cabundle\") pod \"service-ca-9c57cc56f-qp52c\" (UID: \"8c8aaf1e-1192-4552-9de6-614d8e325b7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582078 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqmj4\" (UniqueName: \"kubernetes.io/projected/d5d3db1a-56ec-426e-b14b-8be9a13c6347-kube-api-access-dqmj4\") pod \"catalog-operator-68c6474976-76wk9\" (UID: \"d5d3db1a-56ec-426e-b14b-8be9a13c6347\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582121 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2a401879-c671-46e3-a1ba-d6dbffb5ca5d-metrics-tls\") pod \"dns-default-jcw75\" (UID: \"2a401879-c671-46e3-a1ba-d6dbffb5ca5d\") " pod="openshift-dns/dns-default-jcw75" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582175 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a401879-c671-46e3-a1ba-d6dbffb5ca5d-config-volume\") pod \"dns-default-jcw75\" (UID: \"2a401879-c671-46e3-a1ba-d6dbffb5ca5d\") " pod="openshift-dns/dns-default-jcw75" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582200 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12be153a-7f4d-4521-b4d9-def127e51cd5-trusted-ca\") pod \"ingress-operator-5b745b69d9-zqxhr\" (UID: \"12be153a-7f4d-4521-b4d9-def127e51cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582228 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e55c6a6-4c4a-4548-ba89-a3a34c49124d-proxy-tls\") pod \"machine-config-operator-74547568cd-bbvkt\" (UID: \"0e55c6a6-4c4a-4548-ba89-a3a34c49124d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582250 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8462a37-7879-45ab-91e6-29ae835c9771-webhook-cert\") pod \"packageserver-d55dfcdfc-l26k6\" (UID: \"b8462a37-7879-45ab-91e6-29ae835c9771\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582319 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-csi-data-dir\") pod 
\"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582351 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-mountpoint-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582374 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2250ca28-5f36-4c7f-aca3-b71131272a51-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mz8xv\" (UID: \"2250ca28-5f36-4c7f-aca3-b71131272a51\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582391 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-config\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582429 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14ac26da-2122-48f7-8e83-1acb41418490-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k42q6\" (UID: \"14ac26da-2122-48f7-8e83-1acb41418490\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582449 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bnrq\" (UniqueName: \"kubernetes.io/projected/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-kube-api-access-5bnrq\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582466 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c48b7dac-cabb-40fb-bb18-a587cd1a3184-srv-cert\") pod \"olm-operator-6b444d44fb-twqzk\" (UID: \"c48b7dac-cabb-40fb-bb18-a587cd1a3184\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582484 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjhr\" (UniqueName: \"kubernetes.io/projected/54ac7001-a300-4f21-b1dd-486db1c1e641-kube-api-access-4fjhr\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582502 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b8462a37-7879-45ab-91e6-29ae835c9771-tmpfs\") pod \"packageserver-d55dfcdfc-l26k6\" (UID: \"b8462a37-7879-45ab-91e6-29ae835c9771\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582524 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7866b3d9-f32e-4b75-bade-36891f08ae41-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vlwt4\" (UID: \"7866b3d9-f32e-4b75-bade-36891f08ae41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588589 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54ac7001-a300-4f21-b1dd-486db1c1e641-service-ca-bundle\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588617 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqwdw\" (UniqueName: \"kubernetes.io/projected/6c46f04e-0d87-4198-8eca-6000b06409c0-kube-api-access-gqwdw\") pod \"migrator-59844c95c7-rpjqf\" (UID: \"6c46f04e-0d87-4198-8eca-6000b06409c0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rpjqf" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588660 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ac26da-2122-48f7-8e83-1acb41418490-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k42q6\" (UID: \"14ac26da-2122-48f7-8e83-1acb41418490\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588681 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5-cert\") pod \"ingress-canary-dz45b\" (UID: \"70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5\") " pod="openshift-ingress-canary/ingress-canary-dz45b" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588723 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qcld\" (UniqueName: \"kubernetes.io/projected/6af58b85-a73a-4b78-b663-5e996f555e93-kube-api-access-2qcld\") pod \"package-server-manager-789f6589d5-gcfs9\" (UID: \"6af58b85-a73a-4b78-b663-5e996f555e93\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588746 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/17e0a9b8-d746-4a17-a424-122b5c30ce75-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588766 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-etcd-ca\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588803 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48zmv\" (UniqueName: 
\"kubernetes.io/projected/46c916dc-b6ce-4e56-9ef3-e15d778d7173-kube-api-access-48zmv\") pod \"dns-operator-744455d44c-9xcvs\" (UID: \"46c916dc-b6ce-4e56-9ef3-e15d778d7173\") " pod="openshift-dns-operator/dns-operator-744455d44c-9xcvs" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588848 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c48b7dac-cabb-40fb-bb18-a587cd1a3184-profile-collector-cert\") pod \"olm-operator-6b444d44fb-twqzk\" (UID: \"c48b7dac-cabb-40fb-bb18-a587cd1a3184\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588869 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f228cca-e005-4036-916e-10c7d1f9da1e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sjlcb\" (UID: \"1f228cca-e005-4036-916e-10c7d1f9da1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588918 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/54ac7001-a300-4f21-b1dd-486db1c1e641-default-certificate\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588939 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnzn6\" (UniqueName: \"kubernetes.io/projected/12be153a-7f4d-4521-b4d9-def127e51cd5-kube-api-access-wnzn6\") pod \"ingress-operator-5b745b69d9-zqxhr\" (UID: \"12be153a-7f4d-4521-b4d9-def127e51cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588960 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcsp\" (UniqueName: \"kubernetes.io/projected/bbb59932-429f-403e-8897-a4c7a778d7e2-kube-api-access-8dcsp\") pod \"machine-config-server-xk5tk\" (UID: \"bbb59932-429f-403e-8897-a4c7a778d7e2\") " pod="openshift-machine-config-operator/machine-config-server-xk5tk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589002 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589021 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/17e0a9b8-d746-4a17-a424-122b5c30ce75-registry-certificates\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589036 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1061190-bb41-45c6-99f8-977e2dd1df5b-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-9knxn\" (UID: \"b1061190-bb41-45c6-99f8-977e2dd1df5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9knxn" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589053 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/46c916dc-b6ce-4e56-9ef3-e15d778d7173-metrics-tls\") pod \"dns-operator-744455d44c-9xcvs\" (UID: \"46c916dc-b6ce-4e56-9ef3-e15d778d7173\") " pod="openshift-dns-operator/dns-operator-744455d44c-9xcvs" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589091 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8glq\" (UniqueName: \"kubernetes.io/projected/0e55c6a6-4c4a-4548-ba89-a3a34c49124d-kube-api-access-r8glq\") pod \"machine-config-operator-74547568cd-bbvkt\" (UID: \"0e55c6a6-4c4a-4548-ba89-a3a34c49124d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589108 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp2dx\" (UniqueName: \"kubernetes.io/projected/b8462a37-7879-45ab-91e6-29ae835c9771-kube-api-access-gp2dx\") pod \"packageserver-d55dfcdfc-l26k6\" (UID: \"b8462a37-7879-45ab-91e6-29ae835c9771\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589126 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f228cca-e005-4036-916e-10c7d1f9da1e-config\") pod \"kube-controller-manager-operator-78b949d7b-sjlcb\" (UID: \"1f228cca-e005-4036-916e-10c7d1f9da1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589163 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-serving-cert\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589181 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6af58b85-a73a-4b78-b663-5e996f555e93-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gcfs9\" (UID: \"6af58b85-a73a-4b78-b663-5e996f555e93\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589199 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-registry-tls\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589241 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dd698da2-83bb-4051-b007-bbf97441a6b1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tphbb\" (UID: \"dd698da2-83bb-4051-b007-bbf97441a6b1\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589263 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e0a7a-7a77-4343-8c33-e921e149ddab-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hcgdz\" (UID: \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\") " pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589277 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/12be153a-7f4d-4521-b4d9-def127e51cd5-metrics-tls\") pod \"ingress-operator-5b745b69d9-zqxhr\" (UID: \"12be153a-7f4d-4521-b4d9-def127e51cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589293 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzv5c\" (UniqueName: \"kubernetes.io/projected/c48b7dac-cabb-40fb-bb18-a587cd1a3184-kube-api-access-gzv5c\") pod \"olm-operator-6b444d44fb-twqzk\" (UID: \"c48b7dac-cabb-40fb-bb18-a587cd1a3184\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589335 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/17e0a9b8-d746-4a17-a424-122b5c30ce75-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589352 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-registration-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589475 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5g4z\" (UniqueName: \"kubernetes.io/projected/b1061190-bb41-45c6-99f8-977e2dd1df5b-kube-api-access-k5g4z\") pod \"multus-admission-controller-857f4d67dd-9knxn\" (UID: \"b1061190-bb41-45c6-99f8-977e2dd1df5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9knxn" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589501 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ac26da-2122-48f7-8e83-1acb41418490-config\") pod \"kube-apiserver-operator-766d6c64bb-k42q6\" (UID: \"14ac26da-2122-48f7-8e83-1acb41418490\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589517 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bbb59932-429f-403e-8897-a4c7a778d7e2-node-bootstrap-token\") pod \"machine-config-server-xk5tk\" (UID: \"bbb59932-429f-403e-8897-a4c7a778d7e2\") " pod="openshift-machine-config-operator/machine-config-server-xk5tk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589676 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2250ca28-5f36-4c7f-aca3-b71131272a51-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mz8xv\" (UID: \"2250ca28-5f36-4c7f-aca3-b71131272a51\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589693 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15df38d4-e9b0-433c-8f33-5b5a9a14ca0f-config\") pod \"service-ca-operator-777779d784-l9zft\" (UID: \"15df38d4-e9b0-433c-8f33-5b5a9a14ca0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589725 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589816 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9zv7\" (UniqueName: \"kubernetes.io/projected/15df38d4-e9b0-433c-8f33-5b5a9a14ca0f-kube-api-access-n9zv7\") pod \"service-ca-operator-777779d784-l9zft\" (UID: \"15df38d4-e9b0-433c-8f33-5b5a9a14ca0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589849 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5d3db1a-56ec-426e-b14b-8be9a13c6347-srv-cert\") pod \"catalog-operator-68c6474976-76wk9\" (UID: \"d5d3db1a-56ec-426e-b14b-8be9a13c6347\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589864 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d5d3db1a-56ec-426e-b14b-8be9a13c6347-profile-collector-cert\") pod \"catalog-operator-68c6474976-76wk9\" (UID: \"d5d3db1a-56ec-426e-b14b-8be9a13c6347\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.589999 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15df38d4-e9b0-433c-8f33-5b5a9a14ca0f-serving-cert\") pod \"service-ca-operator-777779d784-l9zft\" (UID: \"15df38d4-e9b0-433c-8f33-5b5a9a14ca0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.590020 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgb6g\" (UniqueName: \"kubernetes.io/projected/dd698da2-83bb-4051-b007-bbf97441a6b1-kube-api-access-vgb6g\") pod \"machine-config-controller-84d6567774-tphbb\" (UID: \"dd698da2-83bb-4051-b007-bbf97441a6b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.590173 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7866b3d9-f32e-4b75-bade-36891f08ae41-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vlwt4\" (UID: \"7866b3d9-f32e-4b75-bade-36891f08ae41\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.591203 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ad1915a-9298-4aba-928b-5d3c7d57a7bb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kn6kp\" (UID: \"6ad1915a-9298-4aba-928b-5d3c7d57a7bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.591634 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-etcd-client\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.591697 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8462a37-7879-45ab-91e6-29ae835c9771-apiservice-cert\") pod \"packageserver-d55dfcdfc-l26k6\" (UID: \"b8462a37-7879-45ab-91e6-29ae835c9771\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.582908 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5ed41b4-64e6-407a-b3a5-104f2b97b008-config-volume\") pod \"collect-profiles-29416020-s5gks\" (UID: \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588284 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17e0a9b8-d746-4a17-a424-122b5c30ce75-trusted-ca\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.588510 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-etcd-service-ca\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.591892 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54ac7001-a300-4f21-b1dd-486db1c1e641-metrics-certs\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.592369 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dd698da2-83bb-4051-b007-bbf97441a6b1-proxy-tls\") pod \"machine-config-controller-84d6567774-tphbb\" (UID: \"dd698da2-83bb-4051-b007-bbf97441a6b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.593452 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/b8462a37-7879-45ab-91e6-29ae835c9771-tmpfs\") pod \"packageserver-d55dfcdfc-l26k6\" (UID: \"b8462a37-7879-45ab-91e6-29ae835c9771\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.609006 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2250ca28-5f36-4c7f-aca3-b71131272a51-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mz8xv\" (UID: \"2250ca28-5f36-4c7f-aca3-b71131272a51\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.610434 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.621314 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15df38d4-e9b0-433c-8f33-5b5a9a14ca0f-config\") pod \"service-ca-operator-777779d784-l9zft\" (UID: \"15df38d4-e9b0-433c-8f33-5b5a9a14ca0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.611163 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14ac26da-2122-48f7-8e83-1acb41418490-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k42q6\" (UID: \"14ac26da-2122-48f7-8e83-1acb41418490\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.611229 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed1e0a7a-7a77-4343-8c33-e921e149ddab-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hcgdz\" (UID: \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\") " pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.612543 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e55c6a6-4c4a-4548-ba89-a3a34c49124d-images\") pod \"machine-config-operator-74547568cd-bbvkt\" (UID: \"0e55c6a6-4c4a-4548-ba89-a3a34c49124d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.616856 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b6wdx"] Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.616533 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-config\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.617418 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7866b3d9-f32e-4b75-bade-36891f08ae41-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vlwt4\" (UID: \"7866b3d9-f32e-4b75-bade-36891f08ae41\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.618006 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/54ac7001-a300-4f21-b1dd-486db1c1e641-stats-auth\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.618052 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e55c6a6-4c4a-4548-ba89-a3a34c49124d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bbvkt\" (UID: \"0e55c6a6-4c4a-4548-ba89-a3a34c49124d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.618109 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8c8aaf1e-1192-4552-9de6-614d8e325b7b-signing-key\") pod \"service-ca-9c57cc56f-qp52c\" (UID: \"8c8aaf1e-1192-4552-9de6-614d8e325b7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.618484 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e55c6a6-4c4a-4548-ba89-a3a34c49124d-proxy-tls\") pod \"machine-config-operator-74547568cd-bbvkt\" (UID: \"0e55c6a6-4c4a-4548-ba89-a3a34c49124d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.619572 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/17e0a9b8-d746-4a17-a424-122b5c30ce75-registry-certificates\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.621963 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f228cca-e005-4036-916e-10c7d1f9da1e-config\") pod \"kube-controller-manager-operator-78b949d7b-sjlcb\" (UID: \"1f228cca-e005-4036-916e-10c7d1f9da1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.621963 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"] Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.622333 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2250ca28-5f36-4c7f-aca3-b71131272a51-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mz8xv\" (UID: \"2250ca28-5f36-4c7f-aca3-b71131272a51\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.622526 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.622714 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dd698da2-83bb-4051-b007-bbf97441a6b1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tphbb\" (UID: \"dd698da2-83bb-4051-b007-bbf97441a6b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.623214 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6af58b85-a73a-4b78-b663-5e996f555e93-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gcfs9\" (UID: \"6af58b85-a73a-4b78-b663-5e996f555e93\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.625457 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5d3db1a-56ec-426e-b14b-8be9a13c6347-srv-cert\") pod \"catalog-operator-68c6474976-76wk9\" (UID: \"d5d3db1a-56ec-426e-b14b-8be9a13c6347\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.625965 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/17e0a9b8-d746-4a17-a424-122b5c30ce75-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.626639 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-etcd-ca\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.628882 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1061190-bb41-45c6-99f8-977e2dd1df5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9knxn\" (UID: \"b1061190-bb41-45c6-99f8-977e2dd1df5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9knxn" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.629410 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/46c916dc-b6ce-4e56-9ef3-e15d778d7173-metrics-tls\") pod \"dns-operator-744455d44c-9xcvs\" (UID: \"46c916dc-b6ce-4e56-9ef3-e15d778d7173\") " pod="openshift-dns-operator/dns-operator-744455d44c-9xcvs" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.610490 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12be153a-7f4d-4521-b4d9-def127e51cd5-trusted-ca\") pod \"ingress-operator-5b745b69d9-zqxhr\" (UID: \"12be153a-7f4d-4521-b4d9-def127e51cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.631462 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/bbb59932-429f-403e-8897-a4c7a778d7e2-certs\") pod \"machine-config-server-xk5tk\" (UID: \"bbb59932-429f-403e-8897-a4c7a778d7e2\") " pod="openshift-machine-config-operator/machine-config-server-xk5tk" Dec 05 19:06:09 crc kubenswrapper[4828]: W1205 19:06:09.631713 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e8d4020_b5ce_4b96_ba2a_ce605a9a514b.slice/crio-8da360b769851add7d91a56f40242eecd0707958ad3b68435949366ef3a36b16 WatchSource:0}: Error finding container 8da360b769851add7d91a56f40242eecd0707958ad3b68435949366ef3a36b16: Status 404 returned error can't find the container with id 8da360b769851add7d91a56f40242eecd0707958ad3b68435949366ef3a36b16 Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.632551 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/17e0a9b8-d746-4a17-a424-122b5c30ce75-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.633087 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5ed41b4-64e6-407a-b3a5-104f2b97b008-secret-volume\") pod \"collect-profiles-29416020-s5gks\" (UID: \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" Dec 05 19:06:09 crc kubenswrapper[4828]: E1205 19:06:09.633958 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:10.13393873 +0000 UTC m=+148.029161036 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.634466 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8462a37-7879-45ab-91e6-29ae835c9771-webhook-cert\") pod \"packageserver-d55dfcdfc-l26k6\" (UID: \"b8462a37-7879-45ab-91e6-29ae835c9771\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.635003 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c48b7dac-cabb-40fb-bb18-a587cd1a3184-profile-collector-cert\") pod \"olm-operator-6b444d44fb-twqzk\" (UID: \"c48b7dac-cabb-40fb-bb18-a587cd1a3184\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.637140 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7866b3d9-f32e-4b75-bade-36891f08ae41-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vlwt4\" (UID: \"7866b3d9-f32e-4b75-bade-36891f08ae41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.637492 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f228cca-e005-4036-916e-10c7d1f9da1e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sjlcb\" (UID: \"1f228cca-e005-4036-916e-10c7d1f9da1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.639783 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e0a7a-7a77-4343-8c33-e921e149ddab-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hcgdz\" (UID: \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\") " pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.641838 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c48b7dac-cabb-40fb-bb18-a587cd1a3184-srv-cert\") pod \"olm-operator-6b444d44fb-twqzk\" (UID: \"c48b7dac-cabb-40fb-bb18-a587cd1a3184\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.645957 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/54ac7001-a300-4f21-b1dd-486db1c1e641-default-certificate\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.650498 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/14ac26da-2122-48f7-8e83-1acb41418490-config\") pod \"kube-apiserver-operator-766d6c64bb-k42q6\" (UID: \"14ac26da-2122-48f7-8e83-1acb41418490\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.656480 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15df38d4-e9b0-433c-8f33-5b5a9a14ca0f-serving-cert\") pod \"service-ca-operator-777779d784-l9zft\" (UID: \"15df38d4-e9b0-433c-8f33-5b5a9a14ca0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.658402 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sthq8\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-kube-api-access-sthq8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.658467 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bbb59932-429f-403e-8897-a4c7a778d7e2-node-bootstrap-token\") pod \"machine-config-server-xk5tk\" (UID: \"bbb59932-429f-403e-8897-a4c7a778d7e2\") " pod="openshift-machine-config-operator/machine-config-server-xk5tk" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.658578 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/12be153a-7f4d-4521-b4d9-def127e51cd5-metrics-tls\") pod \"ingress-operator-5b745b69d9-zqxhr\" (UID: \"12be153a-7f4d-4521-b4d9-def127e51cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.659050 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-serving-cert\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.660296 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d5d3db1a-56ec-426e-b14b-8be9a13c6347-profile-collector-cert\") pod \"catalog-operator-68c6474976-76wk9\" (UID: \"d5d3db1a-56ec-426e-b14b-8be9a13c6347\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.661068 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-registry-tls\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.662394 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12be153a-7f4d-4521-b4d9-def127e51cd5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zqxhr\" (UID: \"12be153a-7f4d-4521-b4d9-def127e51cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:09 crc 
kubenswrapper[4828]: I1205 19:06:09.663095 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnncb\" (UniqueName: \"kubernetes.io/projected/8c8aaf1e-1192-4552-9de6-614d8e325b7b-kube-api-access-bnncb\") pod \"service-ca-9c57cc56f-qp52c\" (UID: \"8c8aaf1e-1192-4552-9de6-614d8e325b7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.684260 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxdl\" (UniqueName: \"kubernetes.io/projected/b5ed41b4-64e6-407a-b3a5-104f2b97b008-kube-api-access-tvxdl\") pod \"collect-profiles-29416020-s5gks\" (UID: \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" Dec 05 19:06:09 crc kubenswrapper[4828]: E1205 19:06:09.696664 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:10.19663815 +0000 UTC m=+148.091860446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.696686 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.696978 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2a401879-c671-46e3-a1ba-d6dbffb5ca5d-metrics-tls\") pod \"dns-default-jcw75\" (UID: \"2a401879-c671-46e3-a1ba-d6dbffb5ca5d\") " pod="openshift-dns/dns-default-jcw75" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.697010 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a401879-c671-46e3-a1ba-d6dbffb5ca5d-config-volume\") pod \"dns-default-jcw75\" (UID: \"2a401879-c671-46e3-a1ba-d6dbffb5ca5d\") " pod="openshift-dns/dns-default-jcw75" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.697055 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-csi-data-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.697089 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-mountpoint-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " 
pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.697176 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5-cert\") pod \"ingress-canary-dz45b\" (UID: \"70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5\") " pod="openshift-ingress-canary/ingress-canary-dz45b" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.697240 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.697287 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-registration-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.697352 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hlnsw" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.697362 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhh9m\" (UniqueName: \"kubernetes.io/projected/55005e7b-f061-4065-8e7c-dd418b7fd072-kube-api-access-fhh9m\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.698061 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-plugins-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.698090 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wz5f\" (UniqueName: \"kubernetes.io/projected/2a401879-c671-46e3-a1ba-d6dbffb5ca5d-kube-api-access-5wz5f\") pod \"dns-default-jcw75\" (UID: \"2a401879-c671-46e3-a1ba-d6dbffb5ca5d\") " pod="openshift-dns/dns-default-jcw75" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.698117 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-socket-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.698147 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f9ht\" (UniqueName: \"kubernetes.io/projected/70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5-kube-api-access-5f9ht\") pod \"ingress-canary-dz45b\" (UID: \"70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5\") " pod="openshift-ingress-canary/ingress-canary-dz45b" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.698581 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-plugins-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.698704 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-socket-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.699514 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54ac7001-a300-4f21-b1dd-486db1c1e641-service-ca-bundle\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.699580 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-mountpoint-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.700193 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a401879-c671-46e3-a1ba-d6dbffb5ca5d-config-volume\") pod \"dns-default-jcw75\" (UID: \"2a401879-c671-46e3-a1ba-d6dbffb5ca5d\") " pod="openshift-dns/dns-default-jcw75" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.700266 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-csi-data-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: E1205 19:06:09.701516 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:10.201499028 +0000 UTC m=+148.096721344 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.701941 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/55005e7b-f061-4065-8e7c-dd418b7fd072-registration-dir\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.707948 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2a401879-c671-46e3-a1ba-d6dbffb5ca5d-metrics-tls\") pod \"dns-default-jcw75\" (UID: \"2a401879-c671-46e3-a1ba-d6dbffb5ca5d\") " pod="openshift-dns/dns-default-jcw75" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.722621 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5-cert\") pod \"ingress-canary-dz45b\" (UID: \"70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5\") " pod="openshift-ingress-canary/ingress-canary-dz45b" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.723355 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbnvr\" (UniqueName: \"kubernetes.io/projected/2250ca28-5f36-4c7f-aca3-b71131272a51-kube-api-access-vbnvr\") pod \"kube-storage-version-migrator-operator-b67b599dd-mz8xv\" (UID: \"2250ca28-5f36-4c7f-aca3-b71131272a51\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.724218 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck"] Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.727516 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq7jp\" (UniqueName: \"kubernetes.io/projected/ed1e0a7a-7a77-4343-8c33-e921e149ddab-kube-api-access-vq7jp\") pod \"marketplace-operator-79b997595-hcgdz\" (UID: \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\") " pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" Dec 05 19:06:09 crc kubenswrapper[4828]: W1205 19:06:09.741117 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18690a90_a6e9_43ef_8550_e90caacb0d95.slice/crio-4331d27fce9fac63cdb33c7701091273f95fe9f8e40399f434cf93f34e54a05e WatchSource:0}: Error finding container 4331d27fce9fac63cdb33c7701091273f95fe9f8e40399f434cf93f34e54a05e: Status 404 returned error can't find the container with id 4331d27fce9fac63cdb33c7701091273f95fe9f8e40399f434cf93f34e54a05e Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.754152 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7866b3d9-f32e-4b75-bade-36891f08ae41-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vlwt4\" (UID: 
\"7866b3d9-f32e-4b75-bade-36891f08ae41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.754299 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-bound-sa-token\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.799622 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:09 crc kubenswrapper[4828]: E1205 19:06:09.800122 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:10.300106836 +0000 UTC m=+148.195329142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.808641 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.808944 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m957x"] Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.810944 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz6lz\" (UniqueName: \"kubernetes.io/projected/6ad1915a-9298-4aba-928b-5d3c7d57a7bb-kube-api-access-zz6lz\") pod \"control-plane-machine-set-operator-78cbb6b69f-kn6kp\" (UID: \"6ad1915a-9298-4aba-928b-5d3c7d57a7bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.823408 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqmj4\" (UniqueName: \"kubernetes.io/projected/d5d3db1a-56ec-426e-b14b-8be9a13c6347-kube-api-access-dqmj4\") pod \"catalog-operator-68c6474976-76wk9\" (UID: \"d5d3db1a-56ec-426e-b14b-8be9a13c6347\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.839720 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bnrq\" (UniqueName: \"kubernetes.io/projected/fa7ef61d-c907-4632-b3c7-e25c01f5d2b5-kube-api-access-5bnrq\") pod \"etcd-operator-b45778765-q8sfr\" (UID: \"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.856956 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8glq\" (UniqueName: \"kubernetes.io/projected/0e55c6a6-4c4a-4548-ba89-a3a34c49124d-kube-api-access-r8glq\") pod \"machine-config-operator-74547568cd-bbvkt\" (UID: \"0e55c6a6-4c4a-4548-ba89-a3a34c49124d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.862128 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.878528 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f228cca-e005-4036-916e-10c7d1f9da1e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sjlcb\" (UID: \"1f228cca-e005-4036-916e-10c7d1f9da1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.897691 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-q9sfv"] Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.903150 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:09 crc kubenswrapper[4828]: E1205 19:06:09.903467 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:10.403451818 +0000 UTC m=+148.298674124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.903551 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.906549 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp2dx\" (UniqueName: \"kubernetes.io/projected/b8462a37-7879-45ab-91e6-29ae835c9771-kube-api-access-gp2dx\") pod \"packageserver-d55dfcdfc-l26k6\" (UID: \"b8462a37-7879-45ab-91e6-29ae835c9771\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.910639 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m"] Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.912807 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.919273 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.925933 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5g4z\" (UniqueName: \"kubernetes.io/projected/b1061190-bb41-45c6-99f8-977e2dd1df5b-kube-api-access-k5g4z\") pod \"multus-admission-controller-857f4d67dd-9knxn\" (UID: \"b1061190-bb41-45c6-99f8-977e2dd1df5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9knxn" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.960037 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjhr\" (UniqueName: \"kubernetes.io/projected/54ac7001-a300-4f21-b1dd-486db1c1e641-kube-api-access-4fjhr\") pod \"router-default-5444994796-ws4t8\" (UID: \"54ac7001-a300-4f21-b1dd-486db1c1e641\") " pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.961512 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6nk4"] Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.977731 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48zmv\" (UniqueName: \"kubernetes.io/projected/46c916dc-b6ce-4e56-9ef3-e15d778d7173-kube-api-access-48zmv\") pod \"dns-operator-744455d44c-9xcvs\" (UID: \"46c916dc-b6ce-4e56-9ef3-e15d778d7173\") " pod="openshift-dns-operator/dns-operator-744455d44c-9xcvs" Dec 05 19:06:09 crc kubenswrapper[4828]: I1205 19:06:09.990013 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqwdw\" (UniqueName: \"kubernetes.io/projected/6c46f04e-0d87-4198-8eca-6000b06409c0-kube-api-access-gqwdw\") pod \"migrator-59844c95c7-rpjqf\" (UID: \"6c46f04e-0d87-4198-8eca-6000b06409c0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rpjqf" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.004055 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.004581 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.005042 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qcld\" (UniqueName: \"kubernetes.io/projected/6af58b85-a73a-4b78-b663-5e996f555e93-kube-api-access-2qcld\") pod \"package-server-manager-789f6589d5-gcfs9\" (UID: \"6af58b85-a73a-4b78-b663-5e996f555e93\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" Dec 05 19:06:10 crc kubenswrapper[4828]: E1205 19:06:10.005211 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:10.505194479 +0000 UTC m=+148.400416785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.011149 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rpjqf" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.015060 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgb6g\" (UniqueName: \"kubernetes.io/projected/dd698da2-83bb-4051-b007-bbf97441a6b1-kube-api-access-vgb6g\") pod \"machine-config-controller-84d6567774-tphbb\" (UID: \"dd698da2-83bb-4051-b007-bbf97441a6b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.022455 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.028121 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.035430 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.039876 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9zv7\" (UniqueName: \"kubernetes.io/projected/15df38d4-e9b0-433c-8f33-5b5a9a14ca0f-kube-api-access-n9zv7\") pod \"service-ca-operator-777779d784-l9zft\" (UID: \"15df38d4-e9b0-433c-8f33-5b5a9a14ca0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.056086 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzv5c\" (UniqueName: \"kubernetes.io/projected/c48b7dac-cabb-40fb-bb18-a587cd1a3184-kube-api-access-gzv5c\") pod \"olm-operator-6b444d44fb-twqzk\" (UID: \"c48b7dac-cabb-40fb-bb18-a587cd1a3184\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.084388 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.085096 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dcsp\" (UniqueName: \"kubernetes.io/projected/bbb59932-429f-403e-8897-a4c7a778d7e2-kube-api-access-8dcsp\") pod \"machine-config-server-xk5tk\" (UID: \"bbb59932-429f-403e-8897-a4c7a778d7e2\") " pod="openshift-machine-config-operator/machine-config-server-xk5tk" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.095685 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9knxn" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.099806 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.106612 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:10 crc kubenswrapper[4828]: E1205 19:06:10.107043 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:10.607027841 +0000 UTC m=+148.502250137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.112965 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ac26da-2122-48f7-8e83-1acb41418490-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k42q6\" (UID: \"14ac26da-2122-48f7-8e83-1acb41418490\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.116131 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.117534 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnzn6\" (UniqueName: \"kubernetes.io/projected/12be153a-7f4d-4521-b4d9-def127e51cd5-kube-api-access-wnzn6\") pod \"ingress-operator-5b745b69d9-zqxhr\" (UID: \"12be153a-7f4d-4521-b4d9-def127e51cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.125290 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.144353 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9xcvs" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.144766 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.155158 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.162201 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhh9m\" (UniqueName: \"kubernetes.io/projected/55005e7b-f061-4065-8e7c-dd418b7fd072-kube-api-access-fhh9m\") pod \"csi-hostpathplugin-gvjlq\" (UID: \"55005e7b-f061-4065-8e7c-dd418b7fd072\") " pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.168133 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f9ht\" (UniqueName: \"kubernetes.io/projected/70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5-kube-api-access-5f9ht\") pod \"ingress-canary-dz45b\" (UID: \"70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5\") " pod="openshift-ingress-canary/ingress-canary-dz45b" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.175935 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wz5f\" (UniqueName: \"kubernetes.io/projected/2a401879-c671-46e3-a1ba-d6dbffb5ca5d-kube-api-access-5wz5f\") pod \"dns-default-jcw75\" (UID: \"2a401879-c671-46e3-a1ba-d6dbffb5ca5d\") " pod="openshift-dns/dns-default-jcw75" Dec 05 19:06:10 crc kubenswrapper[4828]: W1205 19:06:10.183278 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5b5fb60_4709_4e6c_b9a6_ba869094f1e5.slice/crio-fee5ed0b23789fc8750897f7c70f2ed51cc9349009015f5a5271c368611155f3 WatchSource:0}: Error finding container fee5ed0b23789fc8750897f7c70f2ed51cc9349009015f5a5271c368611155f3: Status 404 returned error can't find the container with id fee5ed0b23789fc8750897f7c70f2ed51cc9349009015f5a5271c368611155f3 Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.184370 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.195293 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.207505 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:10 crc kubenswrapper[4828]: E1205 19:06:10.207732 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:10.707699034 +0000 UTC m=+148.602921340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.208213 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:10 crc kubenswrapper[4828]: E1205 19:06:10.208537 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:10.708526956 +0000 UTC m=+148.603749262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.223089 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hlnsw"] Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.225781 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd"] Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.228457 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.231528 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-f24wr"] Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.234184 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xk5tk" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.255514 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.264061 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dz45b" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.270324 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jcw75" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.293948 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rkdvk"] Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.310420 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:10 crc kubenswrapper[4828]: E1205 19:06:10.310770 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:10.810755879 +0000 UTC m=+148.705978185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:10 crc kubenswrapper[4828]: W1205 19:06:10.316111 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90285316_ecf0_4bd7_a9bf_72325863399c.slice/crio-dcf8c063eb341728c296a3b51b928e75a464ce8446c63e00b39afe801ee9bfad WatchSource:0}: Error finding container dcf8c063eb341728c296a3b51b928e75a464ce8446c63e00b39afe801ee9bfad: Status 404 returned error can't find the container with id dcf8c063eb341728c296a3b51b928e75a464ce8446c63e00b39afe801ee9bfad Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.416168 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:10 crc kubenswrapper[4828]: E1205 19:06:10.416607 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:10.916592656 +0000 UTC m=+148.811814962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.434309 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" event={"ID":"2cf96d2c-9865-437a-a87c-63ca051a421d","Type":"ContainerStarted","Data":"b768650f72009108bbc68a5e31152e9c73dad5ed5687d5a097a44cf6237dd18c"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.436073 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m" event={"ID":"24805259-8822-4373-aca0-68442e07b891","Type":"ContainerStarted","Data":"ef103a71d0622baf80abc78dc23b3675f117c7447aa8a5509a8e543a2f2a9604"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.492455 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" event={"ID":"90285316-ecf0-4bd7-a9bf-72325863399c","Type":"ContainerStarted","Data":"dcf8c063eb341728c296a3b51b928e75a464ce8446c63e00b39afe801ee9bfad"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.492496 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ws4t8" event={"ID":"54ac7001-a300-4f21-b1dd-486db1c1e641","Type":"ContainerStarted","Data":"2bb106eb92b484ec1f5cf9ab5d4fec1d737473be1b5bcc8a0fb90addd1977416"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.492509 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx" event={"ID":"e5365032-f31f-4e90-bb94-193e5d6dcc9f","Type":"ContainerStarted","Data":"07bfad868fe21598c3cc9de27c56ca1b20f546d40edcf81c2a7a1550ebb15327"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.492520 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx" event={"ID":"e5365032-f31f-4e90-bb94-193e5d6dcc9f","Type":"ContainerStarted","Data":"49dead24d3e8bf86b84336bb27a6da8122cbb747520e832bd89576401c80d939"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.492530 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" event={"ID":"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48","Type":"ContainerStarted","Data":"80a832bd5ce07cde0ab26dedb0d68b8b738c7758bbcd6e89e42ac8902d616a00"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.493561 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" event={"ID":"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5","Type":"ContainerStarted","Data":"fee5ed0b23789fc8750897f7c70f2ed51cc9349009015f5a5271c368611155f3"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.501326 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" event={"ID":"18690a90-a6e9-43ef-8550-e90caacb0d95","Type":"ContainerStarted","Data":"ee68bed9f597653d7f6f9ec9cb0f7aa48a9b0b7c6f21a2f94d4ac72aa9428669"} Dec 05 19:06:10 crc 
kubenswrapper[4828]: I1205 19:06:10.501364 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" event={"ID":"18690a90-a6e9-43ef-8550-e90caacb0d95","Type":"ContainerStarted","Data":"4331d27fce9fac63cdb33c7701091273f95fe9f8e40399f434cf93f34e54a05e"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.502872 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck" event={"ID":"67625800-270c-4a66-95d0-a2853c23c26f","Type":"ContainerStarted","Data":"f15a86019fa9d440f44587682fa6f6555d534da86ac2230a8a3875ce016e6b1c"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.504288 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" event={"ID":"724b0be1-4d6c-4b96-933d-d94b8f146bd8","Type":"ContainerStarted","Data":"41fd8885454da75de250d5aa9162ccb1b18ce078fcee06597b4977ddbad30414"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.506441 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx" event={"ID":"cfae7bfb-b250-48de-ad6b-e741405f07c3","Type":"ContainerStarted","Data":"8f4aff506cadaf7a0c8059fc653aaeb24649e1589295efef1b126ab9483d91e4"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.506478 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx" event={"ID":"cfae7bfb-b250-48de-ad6b-e741405f07c3","Type":"ContainerStarted","Data":"74980ceae36f56816e35061f9e5ea02b1cebb0fc05633ccc08fe4bc7fc774045"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.507616 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q9sfv" event={"ID":"4f8576a7-5291-4b1f-a06c-35395fa9c9dd","Type":"ContainerStarted","Data":"e09842cab43510e669c6b30aa68d7430956fbe0543b77459cc8b5bac63a4134a"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.509087 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hlnsw" event={"ID":"b8b608ed-2cef-43a4-8b6e-70efed829e65","Type":"ContainerStarted","Data":"8eb5295e959aa4aa88dc22ded940a2f034e2b9f0393f8a5ee73f8efb003bc074"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.511023 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" event={"ID":"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b","Type":"ContainerStarted","Data":"af85bf4a7b98fbb4b0d27eca950fa2fe15e8027fc40921d3cde2c4b5a7aa18b8"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.511070 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" event={"ID":"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b","Type":"ContainerStarted","Data":"8da360b769851add7d91a56f40242eecd0707958ad3b68435949366ef3a36b16"} Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.517056 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:10 crc kubenswrapper[4828]: E1205 19:06:10.517437 4828 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:11.017373492 +0000 UTC m=+148.912595848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.533503 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.619463 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:10 crc kubenswrapper[4828]: E1205 19:06:10.620992 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:11.120807047 +0000 UTC m=+149.016029353 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.655782 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv"] Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.657714 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qp52c"] Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.680013 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9"] Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.705051 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hcgdz"] Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.721209 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:10 crc kubenswrapper[4828]: E1205 19:06:10.721389 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:11.221368256 +0000 UTC m=+149.116590562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.722189 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.723303 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks"] Dec 05 19:06:10 crc kubenswrapper[4828]: E1205 19:06:10.724934 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:11.22491882 +0000 UTC m=+149.120141126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.830676 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:10 crc kubenswrapper[4828]: E1205 19:06:10.831059 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:11.331043624 +0000 UTC m=+149.226265930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.883923 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" podStartSLOduration=128.883903697 podStartE2EDuration="2m8.883903697s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:10.854918018 +0000 UTC m=+148.750140344" watchObservedRunningTime="2025-12-05 19:06:10.883903697 +0000 UTC m=+148.779126013"
Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.888157 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9"]
Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.932474 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:10 crc kubenswrapper[4828]: E1205 19:06:10.932792 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:11.432781405 +0000 UTC m=+149.328003711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.975157 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9knxn"]
Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.979470 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rpjqf"]
Dec 05 19:06:10 crc kubenswrapper[4828]: I1205 19:06:10.987218 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q8sfr"]
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.037202 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:11 crc kubenswrapper[4828]: E1205 19:06:11.037646 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:11.537626887 +0000 UTC m=+149.432849193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.146135 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:11 crc kubenswrapper[4828]: E1205 19:06:11.146482 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:11.646471013 +0000 UTC m=+149.541693319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.252816 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l9zft"]
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.253536 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4"]
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.253595 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:11 crc kubenswrapper[4828]: E1205 19:06:11.255045 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:11.755022701 +0000 UTC m=+149.650245007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.359487 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:11 crc kubenswrapper[4828]: E1205 19:06:11.359847 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:11.859814782 +0000 UTC m=+149.755037088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.462541 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:11 crc kubenswrapper[4828]: E1205 19:06:11.462701 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:11.962673021 +0000 UTC m=+149.857895327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.463031 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:11 crc kubenswrapper[4828]: E1205 19:06:11.463340 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:11.963328468 +0000 UTC m=+149.858550774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.518870 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" event={"ID":"ed1e0a7a-7a77-4343-8c33-e921e149ddab","Type":"ContainerStarted","Data":"dc272372795e4302881905616aa3611e0adeb909667fa0bdf2b74eb29cc74840"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.519738 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" event={"ID":"2250ca28-5f36-4c7f-aca3-b71131272a51","Type":"ContainerStarted","Data":"38e07c9ce95a12e96507700f52ad4da4fd73920d3bfad05b2ad2a43b5b651410"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.520760 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck" event={"ID":"67625800-270c-4a66-95d0-a2853c23c26f","Type":"ContainerStarted","Data":"2fcedd5e4cd40cc6eb79946d50990a17def41ac83afc037bafc4b0bde80e99ac"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.525883 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx" event={"ID":"cfae7bfb-b250-48de-ad6b-e741405f07c3","Type":"ContainerStarted","Data":"fc2882db738f0addcfbc787de5e60428bde96e629f9b9290ae26720af1c56ffe"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.540241 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m" event={"ID":"24805259-8822-4373-aca0-68442e07b891","Type":"ContainerStarted","Data":"59a26fa75223c236296d93c4df36e5584657fd8d9c62c1415585277b85900f9a"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.550318 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ws4t8" event={"ID":"54ac7001-a300-4f21-b1dd-486db1c1e641","Type":"ContainerStarted","Data":"c84ba49d4713aed75f2e0e0faa38348e818884f474d4130c23e1a126f02dbeb4"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.564196 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:11 crc kubenswrapper[4828]: E1205 19:06:11.564626 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:12.064609797 +0000 UTC m=+149.959832103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.564761 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" event={"ID":"2cf96d2c-9865-437a-a87c-63ca051a421d","Type":"ContainerStarted","Data":"20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.565096 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x"
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.566662 4828 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-m957x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.566708 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" podUID="2cf96d2c-9865-437a-a87c-63ca051a421d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.571682 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" event={"ID":"6af58b85-a73a-4b78-b663-5e996f555e93","Type":"ContainerStarted","Data":"03a336fa402fb8b9fa3f6491922048865f013517e10b40035fabd66cc02e10e5"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.575224 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" event={"ID":"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5","Type":"ContainerStarted","Data":"998c430bc65c18c8f90857627dbef9fb670c387cda39a3df509e5b6206e5c529"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.585644 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" event={"ID":"d5d3db1a-56ec-426e-b14b-8be9a13c6347","Type":"ContainerStarted","Data":"f6e96a8fad116164d986a9f7c1bd0fc5bf8680d95accf463f625b5455849039c"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.592959 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" event={"ID":"8c8aaf1e-1192-4552-9de6-614d8e325b7b","Type":"ContainerStarted","Data":"b82d1d610ec88ac4b9c6138fd650156940dde98553e7ae1b46114aedf43f3ef5"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.593014 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" event={"ID":"8c8aaf1e-1192-4552-9de6-614d8e325b7b","Type":"ContainerStarted","Data":"c166ed324013e3bf45917a1b1a57b61052a30e3f246783b293a7b7c8b459e2ec"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.600355 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hlnsw" event={"ID":"b8b608ed-2cef-43a4-8b6e-70efed829e65","Type":"ContainerStarted","Data":"75ab42b8b56bb88a774358139b10da8bdcd880cc2cba7e495dd96e6312cd545f"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.600400 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-hlnsw"
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.601285 4828 patch_prober.go:28] interesting pod/console-operator-58897d9998-hlnsw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/readyz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.601327 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hlnsw" podUID="b8b608ed-2cef-43a4-8b6e-70efed829e65" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/readyz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.610248 4828 generic.go:334] "Generic (PLEG): container finished" podID="724b0be1-4d6c-4b96-933d-d94b8f146bd8" containerID="9db36e032c2b6a218f0cf38737cb450596ac370be21f56337928653e068b0c90" exitCode=0
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.610348 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" event={"ID":"724b0be1-4d6c-4b96-933d-d94b8f146bd8","Type":"ContainerDied","Data":"9db36e032c2b6a218f0cf38737cb450596ac370be21f56337928653e068b0c90"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.626580 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f24wr" event={"ID":"dd460169-7ac7-48de-95a0-4c8ec9fd2d31","Type":"ContainerStarted","Data":"d9e243cee45f7df78fc8910cab970bc9375c22ddcffad5679344b36b7971cedd"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.626621 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f24wr" event={"ID":"dd460169-7ac7-48de-95a0-4c8ec9fd2d31","Type":"ContainerStarted","Data":"5c22dc00c83165c54840ea0544e3b97fc089ae4e7186086ac437f754ca196d7e"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.627493 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-f24wr"
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.638783 4828 patch_prober.go:28] interesting pod/downloads-7954f5f757-f24wr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.638875 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f24wr" podUID="dd460169-7ac7-48de-95a0-4c8ec9fd2d31" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.641436 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx" event={"ID":"e5365032-f31f-4e90-bb94-193e5d6dcc9f","Type":"ContainerStarted","Data":"838e9a01236afca01132c12a26ecdea5db80c9419ae543a1aca2dab6c97e4df5"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.656613 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xk5tk" event={"ID":"bbb59932-429f-403e-8897-a4c7a778d7e2","Type":"ContainerStarted","Data":"b4b016f802b2932a59c84963d26b2629eed1cd4b10049a56da1f7cb8b957d248"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.656648 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xk5tk" event={"ID":"bbb59932-429f-403e-8897-a4c7a778d7e2","Type":"ContainerStarted","Data":"300633b8163722d0cf8d19ff5580f6de948183558bbe8cd68bd029df49adf4d6"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.671992 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:11 crc kubenswrapper[4828]: E1205 19:06:11.673569 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:12.173554556 +0000 UTC m=+150.068776942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.706807 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rpjqf" event={"ID":"6c46f04e-0d87-4198-8eca-6000b06409c0","Type":"ContainerStarted","Data":"f18651b5fb3595a59e6e023bef0f6cd8a84905502c0296aa3df1b751ff244311"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.729296 4828 generic.go:334] "Generic (PLEG): container finished" podID="3e8d4020-b5ce-4b96-ba2a-ce605a9a514b" containerID="af85bf4a7b98fbb4b0d27eca950fa2fe15e8027fc40921d3cde2c4b5a7aa18b8" exitCode=0
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.730140 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" event={"ID":"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b","Type":"ContainerDied","Data":"af85bf4a7b98fbb4b0d27eca950fa2fe15e8027fc40921d3cde2c4b5a7aa18b8"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.745595 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-b6wdx" podStartSLOduration=130.744793739 podStartE2EDuration="2m10.744793739s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:11.735338811 +0000 UTC m=+149.630561147" watchObservedRunningTime="2025-12-05 19:06:11.744793739 +0000 UTC m=+149.640016055"
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.748006 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" event={"ID":"ffe87ead-c1e1-4126-8c85-3054648d6990","Type":"ContainerStarted","Data":"cb7c783750c5cc01f4daa008dce244dcfd803bb1a4aa6bdb8be806f07352b3f7"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.752454 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q9sfv" event={"ID":"4f8576a7-5291-4b1f-a06c-35395fa9c9dd","Type":"ContainerStarted","Data":"eab0d133a19f7ed97abb6f5f5241d7de7a6b937642bf5c6eb02d6574f6b0ab84"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.773231 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:11 crc kubenswrapper[4828]: E1205 19:06:11.776476 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:12.276460157 +0000 UTC m=+150.171682463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.787678 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" event={"ID":"90285316-ecf0-4bd7-a9bf-72325863399c","Type":"ContainerStarted","Data":"eac9c5be5c82cda1c5b2df734f6b28f46ae443ae9aa672c8ec2a04b855f285c2"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.794598 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" event={"ID":"c5e09f6d-a6ad-4ad7-afe3-b18d06c1fa48","Type":"ContainerStarted","Data":"5932ad0fa42b867eec5d3f568ee776ab5eca355be3561bf46f7a80d7caef1f79"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.826609 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9xcvs"]
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.832379 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9knxn" event={"ID":"b1061190-bb41-45c6-99f8-977e2dd1df5b","Type":"ContainerStarted","Data":"ebfa4b6f416156e4db29166a6f92757940c37abe142fe5c0e52a858361f329a5"}
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.841545 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" event={"ID":"b5ed41b4-64e6-407a-b3a5-104f2b97b008","Type":"ContainerStarted","Data":"67b1bd38b1ae4d8d074f6b933aa07e5264ea22dfa0676cdd6c66ee0866601c24"}
Dec 05 19:06:11 crc kubenswrapper[4828]: W1205 19:06:11.865549 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46c916dc_b6ce_4e56_9ef3_e15d778d7173.slice/crio-3fb49fe8a0e15b6be134f95f60bb390df89029106ba738db6fdcf0ec082d7fdc WatchSource:0}: Error finding container 3fb49fe8a0e15b6be134f95f60bb390df89029106ba738db6fdcf0ec082d7fdc: Status 404 returned error can't find the container with id 3fb49fe8a0e15b6be134f95f60bb390df89029106ba738db6fdcf0ec082d7fdc
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.877293 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:11 crc kubenswrapper[4828]: E1205 19:06:11.879259 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:12.379242624 +0000 UTC m=+150.274464930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.906673 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp"]
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.942535 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbgcx" podStartSLOduration=129.942517849 podStartE2EDuration="2m9.942517849s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:11.889644076 +0000 UTC m=+149.784866382" watchObservedRunningTime="2025-12-05 19:06:11.942517849 +0000 UTC m=+149.837740155"
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.972539 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xk5tk" podStartSLOduration=4.972522774 podStartE2EDuration="4.972522774s" podCreationTimestamp="2025-12-05 19:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:11.935264829 +0000 UTC m=+149.830487135" watchObservedRunningTime="2025-12-05 19:06:11.972522774 +0000 UTC m=+149.867745080"
Dec 05 19:06:11 crc kubenswrapper[4828]: I1205 19:06:11.983013 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:11 crc kubenswrapper[4828]: E1205 19:06:11.984401 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:12.484382024 +0000 UTC m=+150.379604340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.024104 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ws4t8"
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.031089 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 19:06:12 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld
Dec 05 19:06:12 crc kubenswrapper[4828]: [+]process-running ok
Dec 05 19:06:12 crc kubenswrapper[4828]: healthz check failed
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.031168 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.034617 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6"]
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.050184 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zl9m" podStartSLOduration=131.050166754 podStartE2EDuration="2m11.050166754s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:12.048810248 +0000 UTC m=+149.944032554" watchObservedRunningTime="2025-12-05 19:06:12.050166754 +0000 UTC m=+149.945389060"
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.064724 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6"]
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.084792 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:12 crc kubenswrapper[4828]: E1205 19:06:12.085161 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:12.585149959 +0000 UTC m=+150.480372265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.086099 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gvjlq"]
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.099890 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr"]
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.099938 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk"]
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.149580 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jcw75"]
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.151161 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" podStartSLOduration=130.151144484 podStartE2EDuration="2m10.151144484s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:12.105931652 +0000 UTC m=+150.001153978" watchObservedRunningTime="2025-12-05 19:06:12.151144484 +0000 UTC m=+150.046366790"
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.151706 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ws4t8" podStartSLOduration=130.15170208 podStartE2EDuration="2m10.15170208s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:12.150934169 +0000 UTC m=+150.046156475" watchObservedRunningTime="2025-12-05 19:06:12.15170208 +0000 UTC m=+150.046924386"
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.172193 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb"]
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.190469 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:12 crc kubenswrapper[4828]: E1205 19:06:12.190790 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:12.690774541 +0000 UTC m=+150.585996847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.237099 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5fck" podStartSLOduration=131.237083622 podStartE2EDuration="2m11.237083622s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:12.197207799 +0000 UTC m=+150.092430105" watchObservedRunningTime="2025-12-05 19:06:12.237083622 +0000 UTC m=+150.132305928"
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.237688 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fn8zx" podStartSLOduration=131.237681397 podStartE2EDuration="2m11.237681397s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:12.235045358 +0000 UTC m=+150.130267664" watchObservedRunningTime="2025-12-05 19:06:12.237681397 +0000 UTC m=+150.132903703"
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.247067 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb"]
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.279795 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xdjfd" podStartSLOduration=131.279780278 podStartE2EDuration="2m11.279780278s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:12.276908833 +0000 UTC m=+150.172131139" watchObservedRunningTime="2025-12-05 19:06:12.279780278 +0000 UTC m=+150.175002574"
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.281411 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt"]
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.292475 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:12 crc kubenswrapper[4828]: E1205 19:06:12.292723 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:12.792710476 +0000 UTC m=+150.687932782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.294414 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-f24wr" podStartSLOduration=131.294404171 podStartE2EDuration="2m11.294404171s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:12.292814589 +0000 UTC m=+150.188036895" watchObservedRunningTime="2025-12-05 19:06:12.294404171 +0000 UTC m=+150.189626467"
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.320917 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dz45b"]
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.345017 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq99m" podStartSLOduration=131.345000634 podStartE2EDuration="2m11.345000634s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:12.342384926 +0000 UTC m=+150.237607232" watchObservedRunningTime="2025-12-05 19:06:12.345000634 +0000 UTC m=+150.240222940"
Dec 05 19:06:12 crc kubenswrapper[4828]: W1205 19:06:12.354655 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e55c6a6_4c4a_4548_ba89_a3a34c49124d.slice/crio-72e264fd6d15f260a27e0cc8c411475e874b73189b7848702677a304c0589a60 WatchSource:0}: Error finding container 72e264fd6d15f260a27e0cc8c411475e874b73189b7848702677a304c0589a60: Status 404 returned error can't find the container with id 72e264fd6d15f260a27e0cc8c411475e874b73189b7848702677a304c0589a60
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.396431 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:12 crc kubenswrapper[4828]: E1205 19:06:12.396804 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:12.896789108 +0000 UTC m=+150.792011414 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.399729 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-qp52c" podStartSLOduration=130.399707585 podStartE2EDuration="2m10.399707585s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:12.396609884 +0000 UTC m=+150.291832190" watchObservedRunningTime="2025-12-05 19:06:12.399707585 +0000 UTC m=+150.294929891"
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.461763 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-q9sfv" podStartSLOduration=131.461747927 podStartE2EDuration="2m11.461747927s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:12.42823905 +0000 UTC m=+150.323461356" watchObservedRunningTime="2025-12-05 19:06:12.461747927 +0000 UTC m=+150.356970233"
Dec 05 19:06:12 crc kubenswrapper[4828]: W1205 19:06:12.473946 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70fb8e9d_6b1f_40ff_9e85_3ed28f99f5d5.slice/crio-951cfeb2e9a2a15b4eebd2855a07374caaf9facc8f3a4e1fa2f748e8ff37adb4 WatchSource:0}: Error finding container 951cfeb2e9a2a15b4eebd2855a07374caaf9facc8f3a4e1fa2f748e8ff37adb4: Status 404 returned error can't find the container with id 951cfeb2e9a2a15b4eebd2855a07374caaf9facc8f3a4e1fa2f748e8ff37adb4
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.504600 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:12 crc kubenswrapper[4828]: E1205 19:06:12.504998 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:13.004983787 +0000 UTC m=+150.900206083 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.536463 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-hlnsw" podStartSLOduration=131.5364415 podStartE2EDuration="2m11.5364415s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:12.462762644 +0000 UTC m=+150.357984950" watchObservedRunningTime="2025-12-05 19:06:12.5364415 +0000 UTC m=+150.431663806"
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.606380 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:12 crc kubenswrapper[4828]: E1205 19:06:12.606903 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:13.106889672 +0000 UTC m=+151.002111978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.709501 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:12 crc kubenswrapper[4828]: E1205 19:06:12.709755 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:13.209743462 +0000 UTC m=+151.104965758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.814344 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:12 crc kubenswrapper[4828]: E1205 19:06:12.814666 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:13.314651985 +0000 UTC m=+151.209874291 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.874868 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" event={"ID":"0e55c6a6-4c4a-4548-ba89-a3a34c49124d","Type":"ContainerStarted","Data":"72e264fd6d15f260a27e0cc8c411475e874b73189b7848702677a304c0589a60"}
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.905104 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" event={"ID":"b5ed41b4-64e6-407a-b3a5-104f2b97b008","Type":"ContainerStarted","Data":"9802d3655f2ed9c9a92b4650dee78a473b5b30178c1546757deaa1bf9b8f1f6b"}
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.915734 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" event={"ID":"55005e7b-f061-4065-8e7c-dd418b7fd072","Type":"ContainerStarted","Data":"04d65d5005418c4fad494ed89801d5db4cd79af31fbd74582ab7aa28821e2bb5"}
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.917126 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" event={"ID":"dd698da2-83bb-4051-b007-bbf97441a6b1","Type":"ContainerStarted","Data":"7fc7b6e7c978cc7344e7374ecd14ecdcc67f3c5924cf39a89b3dd662ce141f7a"}
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.918549 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:12 crc kubenswrapper[4828]: E1205 19:06:12.919029 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:13.419015145 +0000 UTC m=+151.314237451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.934661 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" event={"ID":"fa7ef61d-c907-4632-b3c7-e25c01f5d2b5","Type":"ContainerStarted","Data":"e15909d9325f010547cac44aefeffefaa12993d552b38707bee44e5201c4aec3"}
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.937412 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9xcvs" event={"ID":"46c916dc-b6ce-4e56-9ef3-e15d778d7173","Type":"ContainerStarted","Data":"3fb49fe8a0e15b6be134f95f60bb390df89029106ba738db6fdcf0ec082d7fdc"}
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.996167 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" event={"ID":"3e8d4020-b5ce-4b96-ba2a-ce605a9a514b","Type":"ContainerStarted","Data":"b118a5f3483a4463f2670d1559f261e4400c34b9a002ed877e36bb8ceaac49a8"}
Dec 05 19:06:12 crc kubenswrapper[4828]: I1205 19:06:12.998092 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82"
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.001929 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" event={"ID":"7866b3d9-f32e-4b75-bade-36891f08ae41","Type":"ContainerStarted","Data":"c766630981556344de8cb2b2e53cd24751deea40d3deb6c253a2ee881edcf439"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.002034 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" event={"ID":"7866b3d9-f32e-4b75-bade-36891f08ae41","Type":"ContainerStarted","Data":"23198eae567f9174c65cf004897f2d78a815fa3cfb8f2392dc171f625551c015"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.019317 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.019955 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp" event={"ID":"6ad1915a-9298-4aba-928b-5d3c7d57a7bb","Type":"ContainerStarted","Data":"b1f9bf4cdd24bd72b1e36b315a7d3932db237873a64e52adc0fabef5f311ea19"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.020003 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp" event={"ID":"6ad1915a-9298-4aba-928b-5d3c7d57a7bb","Type":"ContainerStarted","Data":"7292b5b181c211adba5d0a5817f1e27140f1d3e9407ce6a1060cb7a4af53d2b7"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.025542 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 19:06:13 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld
Dec 05 19:06:13 crc kubenswrapper[4828]: [+]process-running ok
Dec 05 19:06:13 crc kubenswrapper[4828]: healthz check failed
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.025795 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.026211 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" event={"ID":"c48b7dac-cabb-40fb-bb18-a587cd1a3184","Type":"ContainerStarted","Data":"56a9cf9d40817bb71c171830b1f4c50e00bdf65b163478ccac666f73716c963f"}
Dec 05 19:06:13 crc kubenswrapper[4828]: E1205 19:06:13.029148 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:13.520799665 +0000 UTC m=+151.416022001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.040558 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" event={"ID":"6af58b85-a73a-4b78-b663-5e996f555e93","Type":"ContainerStarted","Data":"9cd0bccaa599272fe3a47026aa98f26288da01a85593fea0a3db6443eac76ff1"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.044003 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" event={"ID":"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5","Type":"ContainerStarted","Data":"180cd1b0b58dc0c4a32f9425bcf73527864c84ce4a12f0458a9d6816e4160bf5"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.045135 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4"
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.047728 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9knxn" event={"ID":"b1061190-bb41-45c6-99f8-977e2dd1df5b","Type":"ContainerStarted","Data":"f18e0096a0730f96c10afd4ed23df367d817d3e2f725c3bbdd43e9f7ce334688"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.055615 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" event={"ID":"d5d3db1a-56ec-426e-b14b-8be9a13c6347","Type":"ContainerStarted","Data":"8e6e675367f7270cb7599907061f3acbba6689a39a15d6bdeebc8ef75b4eb23b"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.056568 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9"
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.058838 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dz45b" event={"ID":"70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5","Type":"ContainerStarted","Data":"951cfeb2e9a2a15b4eebd2855a07374caaf9facc8f3a4e1fa2f748e8ff37adb4"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.060280 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" event={"ID":"ed1e0a7a-7a77-4343-8c33-e921e149ddab","Type":"ContainerStarted","Data":"873f186cc202eaabfdbf10f25cf53bb03fa939649f9f945fe93e179d4b3f7283"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.060937 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz"
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.068166 4828 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hcgdz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.068206 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" podUID="ed1e0a7a-7a77-4343-8c33-e921e149ddab" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.080305 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" event={"ID":"14ac26da-2122-48f7-8e83-1acb41418490","Type":"ContainerStarted","Data":"20f5229129ae9153951c6dc84793205b3b2dd18ea03ca3d47e83d906da0dee5f"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.122207 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:13 crc kubenswrapper[4828]: E1205 19:06:13.124028 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:13.624013155 +0000 UTC m=+151.519235461 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.180307 4828 generic.go:334] "Generic (PLEG): container finished" podID="ffe87ead-c1e1-4126-8c85-3054648d6990" containerID="8b445302bd71493b768580f5e740e4fdcc745aa7639e9824e9ea46cb9ad2f49c" exitCode=0
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.180654 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" event={"ID":"ffe87ead-c1e1-4126-8c85-3054648d6990","Type":"ContainerDied","Data":"8b445302bd71493b768580f5e740e4fdcc745aa7639e9824e9ea46cb9ad2f49c"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.186181 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" event={"ID":"2250ca28-5f36-4c7f-aca3-b71131272a51","Type":"ContainerStarted","Data":"110451ac137cc36885074acb3e7fbc978aeaf883ce2041c5513009d3fcc8d967"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.192362 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" podStartSLOduration=132.192343532 podStartE2EDuration="2m12.192343532s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:13.150379684 +0000 UTC m=+151.045601990" watchObservedRunningTime="2025-12-05 19:06:13.192343532 +0000 UTC m=+151.087565838"
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.192505 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" podStartSLOduration=132.192499426 podStartE2EDuration="2m12.192499426s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:13.185292687 +0000 UTC m=+151.080515013" watchObservedRunningTime="2025-12-05 19:06:13.192499426 +0000 UTC m=+151.087721722"
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.196009 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" event={"ID":"b8462a37-7879-45ab-91e6-29ae835c9771","Type":"ContainerStarted","Data":"983b4f96912de4fc40b0742d81896ee271e63c6f73adb76b7e17c5c9d162fece"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.200302 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" event={"ID":"1f228cca-e005-4036-916e-10c7d1f9da1e","Type":"ContainerStarted","Data":"efd4b1dc8378cddf61beaf182249033adaefbba21481c00d5968ec9ce7c54f51"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.209215 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jcw75" event={"ID":"2a401879-c671-46e3-a1ba-d6dbffb5ca5d","Type":"ContainerStarted","Data":"e4264cbf42520e627bf40d071b826ef138f11122df7c4307cdff3106b0873070"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.211236 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" event={"ID":"15df38d4-e9b0-433c-8f33-5b5a9a14ca0f","Type":"ContainerStarted","Data":"66aa7650732d1d5889ff05ef3099bdc74c7b4345abe78fb043e01342ad3f76ce"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.211268 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" event={"ID":"15df38d4-e9b0-433c-8f33-5b5a9a14ca0f","Type":"ContainerStarted","Data":"c0582dc5272dd114d447439e68bd1e8b3f6d2721d12108e860dc2f56024f60c1"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.219393 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" event={"ID":"12be153a-7f4d-4521-b4d9-def127e51cd5","Type":"ContainerStarted","Data":"fd01db43d26ba9969c907e351327fe420eb3afa3bf6820224bd15bceeb7c3713"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.221186 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rpjqf" event={"ID":"6c46f04e-0d87-4198-8eca-6000b06409c0","Type":"ContainerStarted","Data":"e5e3b946c12c3c34ec07ed05c5a35693156cfae91b8fe7b585694e8e960c65e0"}
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.224271 4828 patch_prober.go:28] interesting pod/downloads-7954f5f757-f24wr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.224322 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f24wr" podUID="dd460169-7ac7-48de-95a0-4c8ec9fd2d31" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080:
connect: connection refused" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.224704 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.225524 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:13 crc kubenswrapper[4828]: E1205 19:06:13.225633 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:13.725615132 +0000 UTC m=+151.620837438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.238024 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:13 crc kubenswrapper[4828]: E1205 19:06:13.239289 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:13.739268928 +0000 UTC m=+151.634491234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.258645 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.264169 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kn6kp" podStartSLOduration=131.264149829 podStartE2EDuration="2m11.264149829s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:13.259295903 +0000 UTC m=+151.154518209" watchObservedRunningTime="2025-12-05 19:06:13.264149829 +0000 UTC m=+151.159372135" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.310718 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" podStartSLOduration=132.310702656 podStartE2EDuration="2m12.310702656s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:13.310351177 +0000 UTC m=+151.205573483" watchObservedRunningTime="2025-12-05 19:06:13.310702656 +0000 UTC m=+151.205924962" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.346083 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:13 crc kubenswrapper[4828]: E1205 19:06:13.347807 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:13.847783236 +0000 UTC m=+151.743005572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.353816 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-q8sfr" podStartSLOduration=131.353801134 podStartE2EDuration="2m11.353801134s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:13.351577936 +0000 UTC m=+151.246800242" watchObservedRunningTime="2025-12-05 19:06:13.353801134 +0000 UTC m=+151.249023440" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.441571 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vlwt4" podStartSLOduration=131.441557009 podStartE2EDuration="2m11.441557009s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:13.390912354 +0000 UTC m=+151.286134650" watchObservedRunningTime="2025-12-05 19:06:13.441557009 +0000 UTC m=+151.336779315" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.452183 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:13 crc kubenswrapper[4828]: E1205 19:06:13.452532 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:13.952518895 +0000 UTC m=+151.847741201 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.481048 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" podStartSLOduration=131.481028111 podStartE2EDuration="2m11.481028111s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:13.478076453 +0000 UTC m=+151.373298779" watchObservedRunningTime="2025-12-05 19:06:13.481028111 +0000 UTC m=+151.376250417" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.482138 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76wk9" podStartSLOduration=131.4821316 podStartE2EDuration="2m11.4821316s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:13.44313479 +0000 UTC m=+151.338357096" watchObservedRunningTime="2025-12-05 19:06:13.4821316 +0000 UTC m=+151.377353906" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.532861 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l9zft" podStartSLOduration=131.532837815 podStartE2EDuration="2m11.532837815s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:13.520292667 +0000 UTC m=+151.415514973" watchObservedRunningTime="2025-12-05 19:06:13.532837815 +0000 UTC m=+151.428060121" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.558195 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:13 crc kubenswrapper[4828]: E1205 19:06:13.558314 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:14.058290341 +0000 UTC m=+151.953512637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.558458 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:13 crc kubenswrapper[4828]: E1205 19:06:13.558807 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:14.058791654 +0000 UTC m=+151.954013960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.606936 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mz8xv" podStartSLOduration=131.606920573 podStartE2EDuration="2m11.606920573s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:13.556945235 +0000 UTC m=+151.452167541" watchObservedRunningTime="2025-12-05 19:06:13.606920573 +0000 UTC m=+151.502142879" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.665036 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:13 crc kubenswrapper[4828]: E1205 19:06:13.665469 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:14.165452403 +0000 UTC m=+152.060674709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.767075 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:13 crc kubenswrapper[4828]: E1205 19:06:13.767603 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:14.267592394 +0000 UTC m=+152.162814710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.800193 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-hlnsw" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.827541 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:06:13 crc kubenswrapper[4828]: I1205 19:06:13.870301 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:13 crc kubenswrapper[4828]: E1205 19:06:13.870644 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:14.370629248 +0000 UTC m=+152.265851554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.079211 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:14 crc kubenswrapper[4828]: E1205 19:06:14.079532 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:14.57952048 +0000 UTC m=+152.474742786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.117094 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:14 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:14 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:14 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.117144 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.180603 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:14 crc kubenswrapper[4828]: E1205 19:06:14.181197 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:14.681180289 +0000 UTC m=+152.576402595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.281954 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:14 crc kubenswrapper[4828]: E1205 19:06:14.282496 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:14.782484278 +0000 UTC m=+152.677706584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.300738 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" event={"ID":"14ac26da-2122-48f7-8e83-1acb41418490","Type":"ContainerStarted","Data":"5fa398c0c03bb9802ffa52a9be08037d53aca73cfebf10adeb9c96e4f10ca766"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.319337 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9knxn" event={"ID":"b1061190-bb41-45c6-99f8-977e2dd1df5b","Type":"ContainerStarted","Data":"5c99a09a8b0d1d9d79bd42040a3ac23beba453e39834b6a0ce035f36ce8ec9a6"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.333654 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k42q6" podStartSLOduration=132.333631996 podStartE2EDuration="2m12.333631996s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:14.332938107 +0000 UTC m=+152.228160413" watchObservedRunningTime="2025-12-05 19:06:14.333631996 +0000 UTC m=+152.228854302" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.341232 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dz45b" event={"ID":"70fb8e9d-6b1f-40ff-9e85-3ed28f99f5d5","Type":"ContainerStarted","Data":"4f25f2a2f1ab27013a1185f4d74f9fef3f736e593dd865e3909bb9e55a739275"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.359335 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" 
event={"ID":"12be153a-7f4d-4521-b4d9-def127e51cd5","Type":"ContainerStarted","Data":"dfd54cb23eaf882caa2309d5ee646b9b3f74da97f272c2ac91272f913e73f9c1"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.385295 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:14 crc kubenswrapper[4828]: E1205 19:06:14.386699 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:14.886683833 +0000 UTC m=+152.781906139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.388915 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9knxn" podStartSLOduration=132.388906231 podStartE2EDuration="2m12.388906231s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:14.388359537 +0000 UTC m=+152.283581853" watchObservedRunningTime="2025-12-05 19:06:14.388906231 +0000 UTC m=+152.284128537" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.440152 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-dz45b" podStartSLOduration=7.440133631 podStartE2EDuration="7.440133631s" podCreationTimestamp="2025-12-05 19:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:14.440024648 +0000 UTC m=+152.335246954" watchObservedRunningTime="2025-12-05 19:06:14.440133631 +0000 UTC m=+152.335355937" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.464876 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rpjqf" event={"ID":"6c46f04e-0d87-4198-8eca-6000b06409c0","Type":"ContainerStarted","Data":"dc5bb728e9b906bfc9845c9b1f696b5e51a1b307c58ec4ae095e0b34d90d37de"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.465201 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" event={"ID":"b8462a37-7879-45ab-91e6-29ae835c9771","Type":"ContainerStarted","Data":"30be4843dae7417b37f4f6882f692c485cc9188c40d48b99986949ad7c1a7b76"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.465922 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.470724 4828 patch_prober.go:28] interesting 
pod/packageserver-d55dfcdfc-l26k6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.470774 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" podUID="b8462a37-7879-45ab-91e6-29ae835c9771" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.475897 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9xcvs" event={"ID":"46c916dc-b6ce-4e56-9ef3-e15d778d7173","Type":"ContainerStarted","Data":"a6dee589ea33f67c46d53360f29d526c81f7588c1a2d59fe6c7cc950d8059ae7"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.475939 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9xcvs" event={"ID":"46c916dc-b6ce-4e56-9ef3-e15d778d7173","Type":"ContainerStarted","Data":"9054379fb7fad9332ae845c72c3eb8549869699f075500441fd41ff894e6d39f"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.489783 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.490168 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rpjqf" podStartSLOduration=132.490152139 podStartE2EDuration="2m12.490152139s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:14.489203664 +0000 UTC m=+152.384425970" watchObservedRunningTime="2025-12-05 19:06:14.490152139 +0000 UTC m=+152.385374445" Dec 05 19:06:14 crc kubenswrapper[4828]: E1205 19:06:14.491114 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:14.991095333 +0000 UTC m=+152.886317719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.511510 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" event={"ID":"0e55c6a6-4c4a-4548-ba89-a3a34c49124d","Type":"ContainerStarted","Data":"181b4be25c26f36d2d6b15edc10b50d89ecfa0b0d82a0cbf43e2d1552f0cff30"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.541445 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" event={"ID":"724b0be1-4d6c-4b96-933d-d94b8f146bd8","Type":"ContainerStarted","Data":"2bccd5347826c3bc30288ef23a5e626f5f60ae70e357dab84488c56d759a6f3d"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.597248 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:14 crc kubenswrapper[4828]: E1205 19:06:14.597559 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:15.097542437 +0000 UTC m=+152.992764743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.598554 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:14 crc kubenswrapper[4828]: E1205 19:06:14.598872 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:15.098858161 +0000 UTC m=+152.994080547 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.620098 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" event={"ID":"dd698da2-83bb-4051-b007-bbf97441a6b1","Type":"ContainerStarted","Data":"7882be97c831c9c4a03cfebf3cb40050a4aedb738204f5f1bb1bfce1e6e68569"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.620158 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" event={"ID":"dd698da2-83bb-4051-b007-bbf97441a6b1","Type":"ContainerStarted","Data":"ff56fc6c53004f99558fc27b6f54bd2a9b632734f77a27a4b285ef0a27bca6bc"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.620987 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" podStartSLOduration=132.62096756 podStartE2EDuration="2m12.62096756s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:14.620687512 +0000 UTC m=+152.515909828" watchObservedRunningTime="2025-12-05 19:06:14.62096756 +0000 UTC m=+152.516189866" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.622385 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-9xcvs" podStartSLOduration=132.622377397 podStartE2EDuration="2m12.622377397s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:14.539415577 +0000 UTC m=+152.434637893" watchObservedRunningTime="2025-12-05 19:06:14.622377397 +0000 UTC m=+152.517599703" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.662375 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" event={"ID":"c48b7dac-cabb-40fb-bb18-a587cd1a3184","Type":"ContainerStarted","Data":"fcf717f3241216987680f8b9bbd6bc779fbb86ae6e4d0d4d0114902e33dbac18"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.663522 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.692992 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" event={"ID":"6af58b85-a73a-4b78-b663-5e996f555e93","Type":"ContainerStarted","Data":"12eca01d9f474577e907054ae8b114dab42e50994a3c4867d889b27db931a317"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.693703 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.701257 4828 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:14 crc kubenswrapper[4828]: E1205 19:06:14.701950 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:15.201934627 +0000 UTC m=+153.097156933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.706028 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jcw75" event={"ID":"2a401879-c671-46e3-a1ba-d6dbffb5ca5d","Type":"ContainerStarted","Data":"b385ee8c9d99ce9a7473c0a70abaa7cc860271cd6d9198a442b91dc419c0d57c"} Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.708460 4828 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hcgdz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.708533 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" podUID="ed1e0a7a-7a77-4343-8c33-e921e149ddab" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.711686 4828 patch_prober.go:28] interesting pod/downloads-7954f5f757-f24wr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.711752 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f24wr" podUID="dd460169-7ac7-48de-95a0-4c8ec9fd2d31" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.774286 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.814682 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:14 crc kubenswrapper[4828]: E1205 19:06:14.818510 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:15.318496625 +0000 UTC m=+153.213718931 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.903971 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tphbb" podStartSLOduration=132.903949029 podStartE2EDuration="2m12.903949029s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:14.812118738 +0000 UTC m=+152.707341044" watchObservedRunningTime="2025-12-05 19:06:14.903949029 +0000 UTC m=+152.799171335" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.916087 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:14 crc kubenswrapper[4828]: E1205 19:06:14.916373 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:15.416356834 +0000 UTC m=+153.311579130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.918916 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k624z"] Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.919790 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:06:14 crc kubenswrapper[4828]: I1205 19:06:14.960888 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.017094 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.017139 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mvvm\" (UniqueName: \"kubernetes.io/projected/16299781-5338-4577-8a9a-2ec82c3b25b8-kube-api-access-4mvvm\") pod \"certified-operators-k624z\" (UID: \"16299781-5338-4577-8a9a-2ec82c3b25b8\") " pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.017160 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16299781-5338-4577-8a9a-2ec82c3b25b8-utilities\") pod \"certified-operators-k624z\" (UID: \"16299781-5338-4577-8a9a-2ec82c3b25b8\") " pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.017182 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16299781-5338-4577-8a9a-2ec82c3b25b8-catalog-content\") pod \"certified-operators-k624z\" (UID: \"16299781-5338-4577-8a9a-2ec82c3b25b8\") " pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:06:15 crc kubenswrapper[4828]: E1205 19:06:15.017538 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:15.517525029 +0000 UTC m=+153.412747335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.030045 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l5xtb"] Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.032009 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k624z"] Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.032189 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.033448 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:15 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:15 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:15 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.033491 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.051069 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.052115 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" podStartSLOduration=133.052102293 podStartE2EDuration="2m13.052102293s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:15.051541719 +0000 UTC m=+152.946764025" watchObservedRunningTime="2025-12-05 19:06:15.052102293 +0000 UTC m=+152.947324589" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.108478 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l5xtb"] Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.118448 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.118690 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-catalog-content\") pod \"community-operators-l5xtb\" (UID: \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\") " pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.118752 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-utilities\") pod \"community-operators-l5xtb\" (UID: \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\") " pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.118786 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mvvm\" (UniqueName: \"kubernetes.io/projected/16299781-5338-4577-8a9a-2ec82c3b25b8-kube-api-access-4mvvm\") pod \"certified-operators-k624z\" (UID: \"16299781-5338-4577-8a9a-2ec82c3b25b8\") " pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 
19:06:15.118816 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16299781-5338-4577-8a9a-2ec82c3b25b8-utilities\") pod \"certified-operators-k624z\" (UID: \"16299781-5338-4577-8a9a-2ec82c3b25b8\") " pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.118862 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdhhq\" (UniqueName: \"kubernetes.io/projected/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-kube-api-access-sdhhq\") pod \"community-operators-l5xtb\" (UID: \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\") " pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.118886 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16299781-5338-4577-8a9a-2ec82c3b25b8-catalog-content\") pod \"certified-operators-k624z\" (UID: \"16299781-5338-4577-8a9a-2ec82c3b25b8\") " pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:06:15 crc kubenswrapper[4828]: E1205 19:06:15.119228 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:15.619198428 +0000 UTC m=+153.514420734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.119398 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16299781-5338-4577-8a9a-2ec82c3b25b8-catalog-content\") pod \"certified-operators-k624z\" (UID: \"16299781-5338-4577-8a9a-2ec82c3b25b8\") " pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.120111 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16299781-5338-4577-8a9a-2ec82c3b25b8-utilities\") pod \"certified-operators-k624z\" (UID: \"16299781-5338-4577-8a9a-2ec82c3b25b8\") " pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.202297 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ggntr"] Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.203648 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.209937 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8" podStartSLOduration=133.209919821 podStartE2EDuration="2m13.209919821s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:15.203658086 +0000 UTC m=+153.098880392" watchObservedRunningTime="2025-12-05 19:06:15.209919821 +0000 UTC m=+153.105142127" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.219757 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.219798 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-catalog-content\") pod \"community-operators-l5xtb\" (UID: \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\") " pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.219837 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-utilities\") pod \"community-operators-l5xtb\" (UID: \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\") " pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.219885 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdhhq\" (UniqueName: \"kubernetes.io/projected/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-kube-api-access-sdhhq\") pod \"community-operators-l5xtb\" (UID: \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\") " pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:06:15 crc kubenswrapper[4828]: E1205 19:06:15.220495 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:15.720480986 +0000 UTC m=+153.615703292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.221082 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-catalog-content\") pod \"community-operators-l5xtb\" (UID: \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\") " pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.221350 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-utilities\") pod \"community-operators-l5xtb\" (UID: \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\") " pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.227344 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ggntr"] Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.265186 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mvvm\" (UniqueName: \"kubernetes.io/projected/16299781-5338-4577-8a9a-2ec82c3b25b8-kube-api-access-4mvvm\") pod \"certified-operators-k624z\" (UID: \"16299781-5338-4577-8a9a-2ec82c3b25b8\") " pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.285102 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.327435 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.327734 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947f583f-02d4-4dde-9dd3-3991bad7d22e-catalog-content\") pod \"certified-operators-ggntr\" (UID: \"947f583f-02d4-4dde-9dd3-3991bad7d22e\") " pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.327767 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947f583f-02d4-4dde-9dd3-3991bad7d22e-utilities\") pod \"certified-operators-ggntr\" (UID: \"947f583f-02d4-4dde-9dd3-3991bad7d22e\") " pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:06:15 crc kubenswrapper[4828]: E1205 19:06:15.327863 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 19:06:15.827806933 +0000 UTC m=+153.723029239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.327937 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvmz9\" (UniqueName: \"kubernetes.io/projected/947f583f-02d4-4dde-9dd3-3991bad7d22e-kube-api-access-dvmz9\") pod \"certified-operators-ggntr\" (UID: \"947f583f-02d4-4dde-9dd3-3991bad7d22e\") " pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.328041 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.328224 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" podStartSLOduration=133.328204703 podStartE2EDuration="2m13.328204703s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:15.279481229 +0000 UTC m=+153.174703545" watchObservedRunningTime="2025-12-05 19:06:15.328204703 +0000 UTC m=+153.223427009" Dec 05 19:06:15 crc kubenswrapper[4828]: E1205 19:06:15.328357 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:15.828350157 +0000 UTC m=+153.723572463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.330059 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fk7dk"] Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.331069 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fk7dk" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.351372 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdhhq\" (UniqueName: \"kubernetes.io/projected/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-kube-api-access-sdhhq\") pod \"community-operators-l5xtb\" (UID: \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\") " pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.351764 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.384676 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fk7dk"] Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.385836 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-twqzk" podStartSLOduration=133.385804869 podStartE2EDuration="2m13.385804869s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:15.376905777 +0000 UTC m=+153.272128083" watchObservedRunningTime="2025-12-05 19:06:15.385804869 +0000 UTC m=+153.281027175" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.430172 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.430392 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947f583f-02d4-4dde-9dd3-3991bad7d22e-catalog-content\") pod \"certified-operators-ggntr\" (UID: \"947f583f-02d4-4dde-9dd3-3991bad7d22e\") " pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.430413 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947f583f-02d4-4dde-9dd3-3991bad7d22e-utilities\") pod \"certified-operators-ggntr\" (UID: \"947f583f-02d4-4dde-9dd3-3991bad7d22e\") " pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.430429 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvmz9\" (UniqueName: \"kubernetes.io/projected/947f583f-02d4-4dde-9dd3-3991bad7d22e-kube-api-access-dvmz9\") pod \"certified-operators-ggntr\" (UID: \"947f583f-02d4-4dde-9dd3-3991bad7d22e\") " pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.430479 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-catalog-content\") pod \"community-operators-fk7dk\" (UID: \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\") " pod="openshift-marketplace/community-operators-fk7dk" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.430503 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-utilities\") pod \"community-operators-fk7dk\" (UID: \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\") " pod="openshift-marketplace/community-operators-fk7dk" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.430522 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5276z\" (UniqueName: \"kubernetes.io/projected/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-kube-api-access-5276z\") pod \"community-operators-fk7dk\" (UID: \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\") " pod="openshift-marketplace/community-operators-fk7dk" Dec 05 19:06:15 crc kubenswrapper[4828]: E1205 19:06:15.430613 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:15.930597751 +0000 UTC m=+153.825820057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.430962 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947f583f-02d4-4dde-9dd3-3991bad7d22e-catalog-content\") pod \"certified-operators-ggntr\" (UID: \"947f583f-02d4-4dde-9dd3-3991bad7d22e\") " pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.431148 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947f583f-02d4-4dde-9dd3-3991bad7d22e-utilities\") pod \"certified-operators-ggntr\" (UID: \"947f583f-02d4-4dde-9dd3-3991bad7d22e\") " pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.492804 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvmz9\" (UniqueName: \"kubernetes.io/projected/947f583f-02d4-4dde-9dd3-3991bad7d22e-kube-api-access-dvmz9\") pod \"certified-operators-ggntr\" (UID: \"947f583f-02d4-4dde-9dd3-3991bad7d22e\") " pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.509972 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gd82" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.530683 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.532179 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.532315 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-catalog-content\") pod \"community-operators-fk7dk\" (UID: \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\") " pod="openshift-marketplace/community-operators-fk7dk" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.532458 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-utilities\") pod \"community-operators-fk7dk\" (UID: \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\") " pod="openshift-marketplace/community-operators-fk7dk" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.532584 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5276z\" (UniqueName: \"kubernetes.io/projected/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-kube-api-access-5276z\") pod \"community-operators-fk7dk\" (UID: \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\") " pod="openshift-marketplace/community-operators-fk7dk" Dec 05 19:06:15 crc kubenswrapper[4828]: E1205 19:06:15.533510 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:16.033343618 +0000 UTC m=+153.928565934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.534251 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-catalog-content\") pod \"community-operators-fk7dk\" (UID: \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\") " pod="openshift-marketplace/community-operators-fk7dk" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.534651 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-utilities\") pod \"community-operators-fk7dk\" (UID: \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\") " pod="openshift-marketplace/community-operators-fk7dk" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.603495 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5276z\" (UniqueName: \"kubernetes.io/projected/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-kube-api-access-5276z\") pod \"community-operators-fk7dk\" (UID: \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\") " pod="openshift-marketplace/community-operators-fk7dk" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.633327 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:15 crc kubenswrapper[4828]: E1205 19:06:15.633788 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:16.133769414 +0000 UTC m=+154.028991720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.718034 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fk7dk" Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.734665 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:15 crc kubenswrapper[4828]: E1205 19:06:15.735020 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:16.235009692 +0000 UTC m=+154.130231998 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.839386 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:15 crc kubenswrapper[4828]: E1205 19:06:15.839665 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:16.339652078 +0000 UTC m=+154.234874384 (durationBeforeRetry 500ms). 
Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.843655 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" event={"ID":"ffe87ead-c1e1-4126-8c85-3054648d6990","Type":"ContainerStarted","Data":"651596e70b2f3db034e76b92c5a41bd2b5b1bd0b6ac73c5c17e7e1dab04ebcaa"}
Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.843710 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" event={"ID":"ffe87ead-c1e1-4126-8c85-3054648d6990","Type":"ContainerStarted","Data":"36b46343558228c938801a48ecec369973c54a87c9a5c27bd7903df17890aac1"}
Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.897926 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bbvkt" event={"ID":"0e55c6a6-4c4a-4548-ba89-a3a34c49124d","Type":"ContainerStarted","Data":"afe5f4d9368ae04cb1426f18582e6898af7934597a29c71805e06c9e5d15cadd"}
Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.942056 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:15 crc kubenswrapper[4828]: E1205 19:06:15.942380 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:16.442366394 +0000 UTC m=+154.337588700 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:15 crc kubenswrapper[4828]: I1205 19:06:15.957101 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" event={"ID":"55005e7b-f061-4065-8e7c-dd418b7fd072","Type":"ContainerStarted","Data":"33b6a0001b35008e74e1fe813478ecedb433a810b2c3202f2f37b903a516a2c7"}
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.028973 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jcw75" event={"ID":"2a401879-c671-46e3-a1ba-d6dbffb5ca5d","Type":"ContainerStarted","Data":"ec2dbad784aefe4f03b1926299b8ae9c83bf019652dceff3e83f65e6fe382419"}
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.029361 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jcw75"
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.033235 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 19:06:16 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld
Dec 05 19:06:16 crc kubenswrapper[4828]: [+]process-running ok
Dec 05 19:06:16 crc kubenswrapper[4828]: healthz check failed
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.033291 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.042919 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.044442 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:16.544420793 +0000 UTC m=+154.439643099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.059973 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" event={"ID":"12be153a-7f4d-4521-b4d9-def127e51cd5","Type":"ContainerStarted","Data":"b908de7c6354f7807f42b01f25819ede2bda4879d25a3be7e5286427754b2769"}
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.094158 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" podStartSLOduration=135.094138612 podStartE2EDuration="2m15.094138612s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:15.879706425 +0000 UTC m=+153.774928731" watchObservedRunningTime="2025-12-05 19:06:16.094138612 +0000 UTC m=+153.989360918"
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.094514 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jcw75" podStartSLOduration=9.094505713 podStartE2EDuration="9.094505713s" podCreationTimestamp="2025-12-05 19:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:16.080946368 +0000 UTC m=+153.976168664" watchObservedRunningTime="2025-12-05 19:06:16.094505713 +0000 UTC m=+153.989728359"
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.108670 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" event={"ID":"1f228cca-e005-4036-916e-10c7d1f9da1e","Type":"ContainerStarted","Data":"889e16f9c655294618a4cf47a4d8b39fc1a991fedf56782f2e97ec566148a5f5"}
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.121320 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz"
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.144415 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.148334 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:16.64832002 +0000 UTC m=+154.543542326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.152656 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqxhr" podStartSLOduration=134.152640502 podStartE2EDuration="2m14.152640502s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:16.108526609 +0000 UTC m=+154.003748915" watchObservedRunningTime="2025-12-05 19:06:16.152640502 +0000 UTC m=+154.047862808"
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.246507 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.247070 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:16.747050842 +0000 UTC m=+154.642273148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.286278 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sjlcb" podStartSLOduration=134.286224906 podStartE2EDuration="2m14.286224906s" podCreationTimestamp="2025-12-05 19:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:16.172867332 +0000 UTC m=+154.068089638" watchObservedRunningTime="2025-12-05 19:06:16.286224906 +0000 UTC m=+154.181447212"
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.313770 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k624z"]
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.346808 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ggntr"]
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.347807 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.348161 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:16.848131665 +0000 UTC m=+154.743353971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.359768 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l5xtb"]
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.454642 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.454988 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:16.954959128 +0000 UTC m=+154.850181434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.455123 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.455404 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:16.955393289 +0000 UTC m=+154.850615595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.560303 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.560621 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.0606071 +0000 UTC m=+154.955829406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.588637 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fk7dk"]
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.662439 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.662778 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.162764022 +0000 UTC m=+155.057986328 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.763402 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.763561 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.263538257 +0000 UTC m=+155.158760563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.763658 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.764009 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.26400033 +0000 UTC m=+155.159222636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.864997 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.865180 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.365153844 +0000 UTC m=+155.260376140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.865272 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.865583 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.365569565 +0000 UTC m=+155.260791871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.924548 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9zlnj"]
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.925445 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zlnj"
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.926794 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.934469 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zlnj"]
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.966205 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.966487 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.466462713 +0000 UTC m=+155.361685019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.966692 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-utilities\") pod \"redhat-marketplace-9zlnj\" (UID: \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\") " pod="openshift-marketplace/redhat-marketplace-9zlnj"
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.966732 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-catalog-content\") pod \"redhat-marketplace-9zlnj\" (UID: \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\") " pod="openshift-marketplace/redhat-marketplace-9zlnj"
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.966789 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gbc2\" (UniqueName: \"kubernetes.io/projected/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-kube-api-access-4gbc2\") pod \"redhat-marketplace-9zlnj\" (UID: \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\") " pod="openshift-marketplace/redhat-marketplace-9zlnj"
Dec 05 19:06:16 crc kubenswrapper[4828]: I1205 19:06:16.966835 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:16 crc kubenswrapper[4828]: E1205 19:06:16.967077 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.46706664 +0000 UTC m=+155.362288946 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.029426 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 19:06:17 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld
Dec 05 19:06:17 crc kubenswrapper[4828]: [+]process-running ok
Dec 05 19:06:17 crc kubenswrapper[4828]: healthz check failed
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.029482 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.068173 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:17 crc kubenswrapper[4828]: E1205 19:06:17.068367 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.568340998 +0000 UTC m=+155.463563304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.068392 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-catalog-content\") pod \"redhat-marketplace-9zlnj\" (UID: \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\") " pod="openshift-marketplace/redhat-marketplace-9zlnj"
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.068463 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gbc2\" (UniqueName: \"kubernetes.io/projected/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-kube-api-access-4gbc2\") pod \"redhat-marketplace-9zlnj\" (UID: \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\") " pod="openshift-marketplace/redhat-marketplace-9zlnj"
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.068501 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.068532 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-utilities\") pod \"redhat-marketplace-9zlnj\" (UID: \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\") " pod="openshift-marketplace/redhat-marketplace-9zlnj"
Dec 05 19:06:17 crc kubenswrapper[4828]: E1205 19:06:17.068802 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.568790059 +0000 UTC m=+155.464012365 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.069129 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-utilities\") pod \"redhat-marketplace-9zlnj\" (UID: \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\") " pod="openshift-marketplace/redhat-marketplace-9zlnj"
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.069357 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-catalog-content\") pod \"redhat-marketplace-9zlnj\" (UID: \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\") " pod="openshift-marketplace/redhat-marketplace-9zlnj"
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.104617 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gbc2\" (UniqueName: \"kubernetes.io/projected/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-kube-api-access-4gbc2\") pod \"redhat-marketplace-9zlnj\" (UID: \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\") " pod="openshift-marketplace/redhat-marketplace-9zlnj"
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.111639 4828 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-l26k6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.111753 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" podUID="b8462a37-7879-45ab-91e6-29ae835c9771" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.123876 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fk7dk" event={"ID":"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b","Type":"ContainerStarted","Data":"7192461de6764ea6b1085ed13f143c0a41f8b083e74ef2c214945bbf23cbbbfe"}
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.125666 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggntr" event={"ID":"947f583f-02d4-4dde-9dd3-3991bad7d22e","Type":"ContainerStarted","Data":"cc54fdc14961bb0297b612538f89f4559f386f493e1fe2088f570eb55db4ffb7"}
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.126773 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k624z" event={"ID":"16299781-5338-4577-8a9a-2ec82c3b25b8","Type":"ContainerStarted","Data":"96f34d19564ad5dc743e774c848541c62904a14d1c945830eaf1e44ab3fbb105"}
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.127802 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5xtb" event={"ID":"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef","Type":"ContainerStarted","Data":"a963efdfa9402b0d7709559694430189572bfe45ee1483b0dc219d5cc9dc0731"}
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.170718 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:17 crc kubenswrapper[4828]: E1205 19:06:17.172560 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.672538043 +0000 UTC m=+155.567760409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.237805 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zlnj"
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.272586 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:17 crc kubenswrapper[4828]: E1205 19:06:17.273201 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.773188474 +0000 UTC m=+155.668410770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.317937 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gcmtf"]
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.318907 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcmtf"
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.325538 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcmtf"]
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.373554 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.373849 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3748abcf-49c3-4f2f-a663-3256367817b0-catalog-content\") pod \"redhat-marketplace-gcmtf\" (UID: \"3748abcf-49c3-4f2f-a663-3256367817b0\") " pod="openshift-marketplace/redhat-marketplace-gcmtf"
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.373937 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-554w6\" (UniqueName: \"kubernetes.io/projected/3748abcf-49c3-4f2f-a663-3256367817b0-kube-api-access-554w6\") pod \"redhat-marketplace-gcmtf\" (UID: \"3748abcf-49c3-4f2f-a663-3256367817b0\") " pod="openshift-marketplace/redhat-marketplace-gcmtf"
Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.374027 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3748abcf-49c3-4f2f-a663-3256367817b0-utilities\") pod \"redhat-marketplace-gcmtf\" (UID: \"3748abcf-49c3-4f2f-a663-3256367817b0\") " pod="openshift-marketplace/redhat-marketplace-gcmtf"
Dec 05 19:06:17 crc kubenswrapper[4828]: E1205 19:06:17.374162 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.874145895 +0000 UTC m=+155.769368211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.383147 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l26k6" Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.476355 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-554w6\" (UniqueName: \"kubernetes.io/projected/3748abcf-49c3-4f2f-a663-3256367817b0-kube-api-access-554w6\") pod \"redhat-marketplace-gcmtf\" (UID: \"3748abcf-49c3-4f2f-a663-3256367817b0\") " pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.476426 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3748abcf-49c3-4f2f-a663-3256367817b0-utilities\") pod \"redhat-marketplace-gcmtf\" (UID: \"3748abcf-49c3-4f2f-a663-3256367817b0\") " pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.476446 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3748abcf-49c3-4f2f-a663-3256367817b0-catalog-content\") pod \"redhat-marketplace-gcmtf\" (UID: \"3748abcf-49c3-4f2f-a663-3256367817b0\") " pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.476474 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:17 crc kubenswrapper[4828]: E1205 19:06:17.476723 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:17.976711917 +0000 UTC m=+155.871934223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.477334 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3748abcf-49c3-4f2f-a663-3256367817b0-utilities\") pod \"redhat-marketplace-gcmtf\" (UID: \"3748abcf-49c3-4f2f-a663-3256367817b0\") " pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.477411 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3748abcf-49c3-4f2f-a663-3256367817b0-catalog-content\") pod \"redhat-marketplace-gcmtf\" (UID: \"3748abcf-49c3-4f2f-a663-3256367817b0\") " pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.547242 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-554w6\" (UniqueName: \"kubernetes.io/projected/3748abcf-49c3-4f2f-a663-3256367817b0-kube-api-access-554w6\") pod \"redhat-marketplace-gcmtf\" (UID: \"3748abcf-49c3-4f2f-a663-3256367817b0\") " pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.581356 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:17 crc kubenswrapper[4828]: E1205 19:06:17.581664 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:18.081649151 +0000 UTC m=+155.976871457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.589341 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zlnj"] Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.658944 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.682380 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:17 crc kubenswrapper[4828]: E1205 19:06:17.682706 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:18.182692263 +0000 UTC m=+156.077914569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:17 crc kubenswrapper[4828]: W1205 19:06:17.683021 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee23b6fa_d318_49d6_91fb_1dacad01ad5f.slice/crio-f38d4ce4656861d8956ea66528a5a67798df8c4c2ed13134967063b2a2141648 WatchSource:0}: Error finding container f38d4ce4656861d8956ea66528a5a67798df8c4c2ed13134967063b2a2141648: Status 404 returned error can't find the container with id f38d4ce4656861d8956ea66528a5a67798df8c4c2ed13134967063b2a2141648 Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.784048 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:17 crc kubenswrapper[4828]: E1205 19:06:17.784775 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:18.284755602 +0000 UTC m=+156.179977898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.886277 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:17 crc kubenswrapper[4828]: E1205 19:06:17.886665 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:18.386651777 +0000 UTC m=+156.281874083 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.920391 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v2zjd"] Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.921889 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.924140 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.932034 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v2zjd"] Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.994523 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.994816 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f1d24b-86b2-4b82-a397-85027c6090f0-catalog-content\") pod \"redhat-operators-v2zjd\" (UID: \"a8f1d24b-86b2-4b82-a397-85027c6090f0\") " pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.994900 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f1d24b-86b2-4b82-a397-85027c6090f0-utilities\") pod \"redhat-operators-v2zjd\" (UID: \"a8f1d24b-86b2-4b82-a397-85027c6090f0\") " pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:06:17 crc kubenswrapper[4828]: I1205 19:06:17.995019 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzvkz\" (UniqueName: \"kubernetes.io/projected/a8f1d24b-86b2-4b82-a397-85027c6090f0-kube-api-access-wzvkz\") pod \"redhat-operators-v2zjd\" (UID: \"a8f1d24b-86b2-4b82-a397-85027c6090f0\") " pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:06:17 crc kubenswrapper[4828]: E1205 19:06:17.995133 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:18.495117223 +0000 UTC m=+156.390339529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.029965 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:18 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:18 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:18 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.030229 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.079541 4828 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.096150 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzvkz\" (UniqueName: \"kubernetes.io/projected/a8f1d24b-86b2-4b82-a397-85027c6090f0-kube-api-access-wzvkz\") pod \"redhat-operators-v2zjd\" (UID: \"a8f1d24b-86b2-4b82-a397-85027c6090f0\") " pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.096202 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f1d24b-86b2-4b82-a397-85027c6090f0-catalog-content\") pod \"redhat-operators-v2zjd\" (UID: \"a8f1d24b-86b2-4b82-a397-85027c6090f0\") " pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.096249 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f1d24b-86b2-4b82-a397-85027c6090f0-utilities\") pod \"redhat-operators-v2zjd\" (UID: \"a8f1d24b-86b2-4b82-a397-85027c6090f0\") " pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.096292 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.096749 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f1d24b-86b2-4b82-a397-85027c6090f0-catalog-content\") pod \"redhat-operators-v2zjd\" (UID: \"a8f1d24b-86b2-4b82-a397-85027c6090f0\") " pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:06:18 crc 
kubenswrapper[4828]: I1205 19:06:18.096793 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f1d24b-86b2-4b82-a397-85027c6090f0-utilities\") pod \"redhat-operators-v2zjd\" (UID: \"a8f1d24b-86b2-4b82-a397-85027c6090f0\") " pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:06:18 crc kubenswrapper[4828]: E1205 19:06:18.096949 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:18.596936375 +0000 UTC m=+156.492158681 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.131558 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzvkz\" (UniqueName: \"kubernetes.io/projected/a8f1d24b-86b2-4b82-a397-85027c6090f0-kube-api-access-wzvkz\") pod \"redhat-operators-v2zjd\" (UID: \"a8f1d24b-86b2-4b82-a397-85027c6090f0\") " pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.145945 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcmtf"] Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.146069 4828 generic.go:334] "Generic (PLEG): container finished" podID="16299781-5338-4577-8a9a-2ec82c3b25b8" containerID="5d610ecffb4f0132832ee2ede40d97a28d633f4fdcefcdd080eace2252f312b1" exitCode=0 Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.146133 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k624z" event={"ID":"16299781-5338-4577-8a9a-2ec82c3b25b8","Type":"ContainerDied","Data":"5d610ecffb4f0132832ee2ede40d97a28d633f4fdcefcdd080eace2252f312b1"} Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.149294 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.150184 4828 generic.go:334] "Generic (PLEG): container finished" podID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" containerID="12c4e02ff89ceb7aacb7bc534d7e598911fd9945878cc69434cb3a24650b470e" exitCode=0 Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.150234 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5xtb" event={"ID":"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef","Type":"ContainerDied","Data":"12c4e02ff89ceb7aacb7bc534d7e598911fd9945878cc69434cb3a24650b470e"} Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.157763 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zlnj" event={"ID":"ee23b6fa-d318-49d6-91fb-1dacad01ad5f","Type":"ContainerStarted","Data":"7828b439706cd72464043848f5e34ef594c8ef4c4750dce22996d8cb0727488b"} Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.157805 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zlnj" 
event={"ID":"ee23b6fa-d318-49d6-91fb-1dacad01ad5f","Type":"ContainerStarted","Data":"f38d4ce4656861d8956ea66528a5a67798df8c4c2ed13134967063b2a2141648"} Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.180081 4828 generic.go:334] "Generic (PLEG): container finished" podID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" containerID="c19d3201e5ca7ed8271b8939a41f8e7bc382479a07c7b28cc1b51b4fb1135cdf" exitCode=0 Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.180163 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fk7dk" event={"ID":"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b","Type":"ContainerDied","Data":"c19d3201e5ca7ed8271b8939a41f8e7bc382479a07c7b28cc1b51b4fb1135cdf"} Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.198246 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:18 crc kubenswrapper[4828]: E1205 19:06:18.199335 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:18.699318343 +0000 UTC m=+156.594540649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.215123 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" event={"ID":"55005e7b-f061-4065-8e7c-dd418b7fd072","Type":"ContainerStarted","Data":"7af1eaf7d46dfe67df2f34caf31945bd37bbbb86f36539b327977fdb4084e86b"} Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.220123 4828 generic.go:334] "Generic (PLEG): container finished" podID="947f583f-02d4-4dde-9dd3-3991bad7d22e" containerID="9690cfe6489ea1da71b12646ce1044a24b43116781d8adc2df5b36f34cafc74c" exitCode=0 Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.221188 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggntr" event={"ID":"947f583f-02d4-4dde-9dd3-3991bad7d22e","Type":"ContainerDied","Data":"9690cfe6489ea1da71b12646ce1044a24b43116781d8adc2df5b36f34cafc74c"} Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.258501 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.300342 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:18 crc kubenswrapper[4828]: E1205 19:06:18.300618 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:18.800604901 +0000 UTC m=+156.695827207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.325170 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ggj5p"] Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.326495 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.331562 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggj5p"] Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.401375 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:18 crc kubenswrapper[4828]: E1205 19:06:18.401561 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:18.90153465 +0000 UTC m=+156.796756956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.401654 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr5fk\" (UniqueName: \"kubernetes.io/projected/20686f13-e212-4a7c-8a64-8f8674311fff-kube-api-access-kr5fk\") pod \"redhat-operators-ggj5p\" (UID: \"20686f13-e212-4a7c-8a64-8f8674311fff\") " pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.401701 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20686f13-e212-4a7c-8a64-8f8674311fff-catalog-content\") pod \"redhat-operators-ggj5p\" (UID: \"20686f13-e212-4a7c-8a64-8f8674311fff\") " pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.401720 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20686f13-e212-4a7c-8a64-8f8674311fff-utilities\") pod \"redhat-operators-ggj5p\" (UID: \"20686f13-e212-4a7c-8a64-8f8674311fff\") " pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.401758 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:18 crc kubenswrapper[4828]: E1205 19:06:18.402143 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:18.902130246 +0000 UTC m=+156.797352552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.502568 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.502777 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr5fk\" (UniqueName: \"kubernetes.io/projected/20686f13-e212-4a7c-8a64-8f8674311fff-kube-api-access-kr5fk\") pod \"redhat-operators-ggj5p\" (UID: \"20686f13-e212-4a7c-8a64-8f8674311fff\") " pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.502858 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20686f13-e212-4a7c-8a64-8f8674311fff-catalog-content\") pod \"redhat-operators-ggj5p\" (UID: \"20686f13-e212-4a7c-8a64-8f8674311fff\") " pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.502887 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20686f13-e212-4a7c-8a64-8f8674311fff-utilities\") pod \"redhat-operators-ggj5p\" (UID: \"20686f13-e212-4a7c-8a64-8f8674311fff\") " pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.503403 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20686f13-e212-4a7c-8a64-8f8674311fff-utilities\") pod \"redhat-operators-ggj5p\" (UID: \"20686f13-e212-4a7c-8a64-8f8674311fff\") " pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:06:18 crc kubenswrapper[4828]: E1205 19:06:18.503454 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 19:06:19.003435035 +0000 UTC m=+156.898657341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.503679 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20686f13-e212-4a7c-8a64-8f8674311fff-catalog-content\") pod \"redhat-operators-ggj5p\" (UID: \"20686f13-e212-4a7c-8a64-8f8674311fff\") " pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.534948 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr5fk\" (UniqueName: \"kubernetes.io/projected/20686f13-e212-4a7c-8a64-8f8674311fff-kube-api-access-kr5fk\") pod \"redhat-operators-ggj5p\" (UID: \"20686f13-e212-4a7c-8a64-8f8674311fff\") " pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.604604 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:18 crc kubenswrapper[4828]: E1205 19:06:18.605058 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 19:06:19.105043462 +0000 UTC m=+157.000265768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wk88t" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.657433 4828 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-05T19:06:18.079576911Z","Handler":null,"Name":""} Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.661319 4828 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.661364 4828 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.706115 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.737203 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.743733 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v2zjd"] Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.788477 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.808026 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.815039 4828 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.815077 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:18 crc kubenswrapper[4828]: I1205 19:06:18.863764 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wk88t\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.030071 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 19:06:19 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]process-running ok
Dec 05 19:06:19 crc kubenswrapper[4828]: healthz check failed
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.030322 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.147771 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.169426 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggj5p"]
Dec 05 19:06:19 crc kubenswrapper[4828]: W1205 19:06:19.181362 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20686f13_e212_4a7c_8a64_8f8674311fff.slice/crio-c8076a7759f14b85b9e7068f9853c6b1ecd5d5fb2fb247c5fbfbfe35608dbb22 WatchSource:0}: Error finding container c8076a7759f14b85b9e7068f9853c6b1ecd5d5fb2fb247c5fbfbfe35608dbb22: Status 404 returned error can't find the container with id c8076a7759f14b85b9e7068f9853c6b1ecd5d5fb2fb247c5fbfbfe35608dbb22
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.196624 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.197554 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.239889 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.295147 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" event={"ID":"55005e7b-f061-4065-8e7c-dd418b7fd072","Type":"ContainerStarted","Data":"da77cdce8c147fad21a6919d8286be4158fe910b349414e509dc4fb20ad291c1"}
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.295183 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" event={"ID":"55005e7b-f061-4065-8e7c-dd418b7fd072","Type":"ContainerStarted","Data":"5da6202d6444046763ba3f793fa2f75b07f0ece7076e84a5e793ae81847e3a8b"}
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.298462 4828 generic.go:334] "Generic (PLEG): container finished" podID="a8f1d24b-86b2-4b82-a397-85027c6090f0" containerID="1273993ba1a492beba1c123b69aaa2f215c9ba27cf5565c8f4e8dd86ffda8986" exitCode=0
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.298498 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2zjd" event={"ID":"a8f1d24b-86b2-4b82-a397-85027c6090f0","Type":"ContainerDied","Data":"1273993ba1a492beba1c123b69aaa2f215c9ba27cf5565c8f4e8dd86ffda8986"}
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.298512 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2zjd" event={"ID":"a8f1d24b-86b2-4b82-a397-85027c6090f0","Type":"ContainerStarted","Data":"582ae2eb18a7a2f402d846274b685876050267049f92fa57ec2ffb7dd534708d"}
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.300056 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggj5p" event={"ID":"20686f13-e212-4a7c-8a64-8f8674311fff","Type":"ContainerStarted","Data":"c8076a7759f14b85b9e7068f9853c6b1ecd5d5fb2fb247c5fbfbfe35608dbb22"}
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.301031 4828 generic.go:334] "Generic (PLEG): container finished" podID="3748abcf-49c3-4f2f-a663-3256367817b0" containerID="dd8eba71c6f6c9b7fd750b3c198eb4c8313bb54468edde6fe693322b46a4efad" exitCode=0
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.301061 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcmtf" event={"ID":"3748abcf-49c3-4f2f-a663-3256367817b0","Type":"ContainerDied","Data":"dd8eba71c6f6c9b7fd750b3c198eb4c8313bb54468edde6fe693322b46a4efad"}
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.301123 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcmtf" event={"ID":"3748abcf-49c3-4f2f-a663-3256367817b0","Type":"ContainerStarted","Data":"bddf82d233c4aedf3620f5d1fcd854c2cda4d48c89c3017f47d86f631450a7b9"}
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.302806 4828 generic.go:334] "Generic (PLEG): container finished" podID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" containerID="7828b439706cd72464043848f5e34ef594c8ef4c4750dce22996d8cb0727488b" exitCode=0
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.302849 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zlnj" event={"ID":"ee23b6fa-d318-49d6-91fb-1dacad01ad5f","Type":"ContainerDied","Data":"7828b439706cd72464043848f5e34ef594c8ef4c4750dce22996d8cb0727488b"}
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.304270 4828 generic.go:334] "Generic (PLEG): container finished" podID="b5ed41b4-64e6-407a-b3a5-104f2b97b008" containerID="9802d3655f2ed9c9a92b4650dee78a473b5b30178c1546757deaa1bf9b8f1f6b" exitCode=0
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.304946 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" event={"ID":"b5ed41b4-64e6-407a-b3a5-104f2b97b008","Type":"ContainerDied","Data":"9802d3655f2ed9c9a92b4650dee78a473b5b30178c1546757deaa1bf9b8f1f6b"}
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.309479 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-smwg8"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.334409 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gvjlq" podStartSLOduration=12.334393515 podStartE2EDuration="12.334393515s" podCreationTimestamp="2025-12-05 19:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:19.330858572 +0000 UTC m=+157.226080878" watchObservedRunningTime="2025-12-05 19:06:19.334393515 +0000 UTC m=+157.229615821"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.390311 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-q9sfv"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.391314 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-q9sfv"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.416362 4828 patch_prober.go:28] interesting pod/console-f9d7485db-q9sfv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.416416 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-q9sfv" podUID="4f8576a7-5291-4b1f-a06c-35395fa9c9dd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.546219 4828 patch_prober.go:28] interesting pod/downloads-7954f5f757-f24wr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.546275 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f24wr" podUID="dd460169-7ac7-48de-95a0-4c8ec9fd2d31" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.546302 4828 patch_prober.go:28] interesting pod/downloads-7954f5f757-f24wr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.546395 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f24wr" podUID="dd460169-7ac7-48de-95a0-4c8ec9fd2d31" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.617006 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-rkdvk"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.617358 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-rkdvk"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.635586 4828 patch_prober.go:28] interesting pod/apiserver-76f77b778f-rkdvk container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]log ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]etcd ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]poststarthook/max-in-flight-filter ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]poststarthook/openshift.io-startinformers ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 05 19:06:19 crc kubenswrapper[4828]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 05 19:06:19 crc kubenswrapper[4828]: livez check failed
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.635643 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" podUID="ffe87ead-c1e1-4126-8c85-3054648d6990" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 19:06:19 crc kubenswrapper[4828]: I1205 19:06:19.744226 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wk88t"]
Dec 05 19:06:20 crc kubenswrapper[4828]: I1205 19:06:20.024619 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ws4t8"
Dec 05 19:06:20 crc kubenswrapper[4828]: I1205 19:06:20.027226 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 19:06:20 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld
Dec 05 19:06:20 crc kubenswrapper[4828]: [+]process-running ok
Dec 05 19:06:20 crc kubenswrapper[4828]: healthz check failed
Dec 05 19:06:20 crc kubenswrapper[4828]: I1205 19:06:20.027263 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 19:06:20 crc kubenswrapper[4828]: I1205 19:06:20.312076 4828 generic.go:334] "Generic (PLEG): container finished" podID="20686f13-e212-4a7c-8a64-8f8674311fff" containerID="3a1d72b6f1a1eaed5b80ad80f826f5d82e33afb5f8bac08273effe5e4d052e27" exitCode=0
Dec 05 19:06:20 crc kubenswrapper[4828]: I1205 19:06:20.312135 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggj5p" event={"ID":"20686f13-e212-4a7c-8a64-8f8674311fff","Type":"ContainerDied","Data":"3a1d72b6f1a1eaed5b80ad80f826f5d82e33afb5f8bac08273effe5e4d052e27"}
Dec 05 19:06:20 crc kubenswrapper[4828]: I1205 19:06:20.317733 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" event={"ID":"17e0a9b8-d746-4a17-a424-122b5c30ce75","Type":"ContainerStarted","Data":"487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42"}
Dec 05 19:06:20 crc kubenswrapper[4828]: I1205 19:06:20.317765 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" event={"ID":"17e0a9b8-d746-4a17-a424-122b5c30ce75","Type":"ContainerStarted","Data":"0322e428fd130c5c57be85540610fc781b7637ea49b2765877a0086f7847615b"}
Dec 05 19:06:20 crc kubenswrapper[4828]: I1205 19:06:20.318146 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t"
Dec 05 19:06:20 crc kubenswrapper[4828]: I1205 19:06:20.361962 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" podStartSLOduration=139.361946054 podStartE2EDuration="2m19.361946054s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:20.36101613 +0000 UTC m=+158.256238436" watchObservedRunningTime="2025-12-05 19:06:20.361946054 +0000 UTC m=+158.257168360"
Dec 05 19:06:20 crc kubenswrapper[4828]: I1205 19:06:20.467591 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.017580 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks"
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.046994 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 19:06:21 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld
Dec 05 19:06:21 crc kubenswrapper[4828]: [+]process-running ok
Dec 05 19:06:21 crc kubenswrapper[4828]: healthz check failed
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.047055 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.215230 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5ed41b4-64e6-407a-b3a5-104f2b97b008-secret-volume\") pod \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\" (UID: \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\") "
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.215344 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvxdl\" (UniqueName: \"kubernetes.io/projected/b5ed41b4-64e6-407a-b3a5-104f2b97b008-kube-api-access-tvxdl\") pod \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\" (UID: \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\") "
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.215375 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5ed41b4-64e6-407a-b3a5-104f2b97b008-config-volume\") pod \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\" (UID: \"b5ed41b4-64e6-407a-b3a5-104f2b97b008\") "
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.219581 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5ed41b4-64e6-407a-b3a5-104f2b97b008-config-volume" (OuterVolumeSpecName: "config-volume") pod "b5ed41b4-64e6-407a-b3a5-104f2b97b008" (UID: "b5ed41b4-64e6-407a-b3a5-104f2b97b008"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.246205 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ed41b4-64e6-407a-b3a5-104f2b97b008-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b5ed41b4-64e6-407a-b3a5-104f2b97b008" (UID: "b5ed41b4-64e6-407a-b3a5-104f2b97b008"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.247481 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ed41b4-64e6-407a-b3a5-104f2b97b008-kube-api-access-tvxdl" (OuterVolumeSpecName: "kube-api-access-tvxdl") pod "b5ed41b4-64e6-407a-b3a5-104f2b97b008" (UID: "b5ed41b4-64e6-407a-b3a5-104f2b97b008"). InnerVolumeSpecName "kube-api-access-tvxdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.317146 4828 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5ed41b4-64e6-407a-b3a5-104f2b97b008-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.317174 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvxdl\" (UniqueName: \"kubernetes.io/projected/b5ed41b4-64e6-407a-b3a5-104f2b97b008-kube-api-access-tvxdl\") on node \"crc\" DevicePath \"\""
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.317182 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5ed41b4-64e6-407a-b3a5-104f2b97b008-config-volume\") on node \"crc\" DevicePath \"\""
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.346413 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 05 19:06:21 crc kubenswrapper[4828]: E1205 19:06:21.346653 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ed41b4-64e6-407a-b3a5-104f2b97b008" containerName="collect-profiles"
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.346670 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ed41b4-64e6-407a-b3a5-104f2b97b008" containerName="collect-profiles"
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.346801 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ed41b4-64e6-407a-b3a5-104f2b97b008" containerName="collect-profiles"
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.346972 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks"
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.347899 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks" event={"ID":"b5ed41b4-64e6-407a-b3a5-104f2b97b008","Type":"ContainerDied","Data":"67b1bd38b1ae4d8d074f6b933aa07e5264ea22dfa0676cdd6c66ee0866601c24"}
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.348046 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67b1bd38b1ae4d8d074f6b933aa07e5264ea22dfa0676cdd6c66ee0866601c24"
Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.348011 4828 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.350651 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.350727 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.356536 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.518702 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f361a9bc-ac07-4de3-9f58-e796f85f7405-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f361a9bc-ac07-4de3-9f58-e796f85f7405\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.519663 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f361a9bc-ac07-4de3-9f58-e796f85f7405-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f361a9bc-ac07-4de3-9f58-e796f85f7405\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.620722 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f361a9bc-ac07-4de3-9f58-e796f85f7405-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f361a9bc-ac07-4de3-9f58-e796f85f7405\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.620875 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f361a9bc-ac07-4de3-9f58-e796f85f7405-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f361a9bc-ac07-4de3-9f58-e796f85f7405\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.620958 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f361a9bc-ac07-4de3-9f58-e796f85f7405-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f361a9bc-ac07-4de3-9f58-e796f85f7405\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.636469 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f361a9bc-ac07-4de3-9f58-e796f85f7405-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f361a9bc-ac07-4de3-9f58-e796f85f7405\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 19:06:21 crc kubenswrapper[4828]: I1205 19:06:21.697398 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 19:06:22 crc kubenswrapper[4828]: I1205 19:06:22.025808 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:22 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:22 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:22 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:22 crc kubenswrapper[4828]: I1205 19:06:22.025873 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:22 crc kubenswrapper[4828]: I1205 19:06:22.233351 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 05 19:06:22 crc kubenswrapper[4828]: I1205 19:06:22.356030 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f361a9bc-ac07-4de3-9f58-e796f85f7405","Type":"ContainerStarted","Data":"7f751d6eba930a4428a9bd55324c11dfb71c35c579566eb5887e3093cc6742c6"} Dec 05 19:06:23 crc kubenswrapper[4828]: I1205 19:06:23.025751 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:23 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:23 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:23 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:23 crc kubenswrapper[4828]: I1205 19:06:23.026055 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:23 crc kubenswrapper[4828]: I1205 19:06:23.202611 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 05 19:06:23 crc kubenswrapper[4828]: I1205 19:06:23.203582 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 19:06:23 crc kubenswrapper[4828]: I1205 19:06:23.207742 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 05 19:06:23 crc kubenswrapper[4828]: I1205 19:06:23.208429 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 05 19:06:23 crc kubenswrapper[4828]: I1205 19:06:23.211272 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 05 19:06:23 crc kubenswrapper[4828]: I1205 19:06:23.360992 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c916de-c563-4074-a9f2-b5ac14f8ff45-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"23c916de-c563-4074-a9f2-b5ac14f8ff45\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 19:06:23 crc kubenswrapper[4828]: I1205 19:06:23.361059 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c916de-c563-4074-a9f2-b5ac14f8ff45-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"23c916de-c563-4074-a9f2-b5ac14f8ff45\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 19:06:23 crc kubenswrapper[4828]: I1205 19:06:23.462019 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c916de-c563-4074-a9f2-b5ac14f8ff45-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"23c916de-c563-4074-a9f2-b5ac14f8ff45\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 19:06:23 crc kubenswrapper[4828]: I1205 19:06:23.462100 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c916de-c563-4074-a9f2-b5ac14f8ff45-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"23c916de-c563-4074-a9f2-b5ac14f8ff45\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 19:06:23 crc kubenswrapper[4828]: I1205 19:06:23.462543 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c916de-c563-4074-a9f2-b5ac14f8ff45-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"23c916de-c563-4074-a9f2-b5ac14f8ff45\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 19:06:24 crc kubenswrapper[4828]: I1205 19:06:24.027023 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:24 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:24 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:24 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:24 crc kubenswrapper[4828]: I1205 19:06:24.027105 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:24 crc kubenswrapper[4828]: I1205 19:06:24.403508 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c916de-c563-4074-a9f2-b5ac14f8ff45-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"23c916de-c563-4074-a9f2-b5ac14f8ff45\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 19:06:24 crc kubenswrapper[4828]: I1205 19:06:24.626314 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:24 crc kubenswrapper[4828]: I1205 19:06:24.643886 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-rkdvk" Dec 05 19:06:24 crc kubenswrapper[4828]: I1205 19:06:24.668501 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 19:06:24 crc kubenswrapper[4828]: I1205 19:06:24.986577 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:06:24 crc kubenswrapper[4828]: I1205 19:06:24.990994 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0595333b-a181-4a2b-90b8-e2accf80e78e-metrics-certs\") pod \"network-metrics-daemon-bvf6n\" (UID: \"0595333b-a181-4a2b-90b8-e2accf80e78e\") " pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:06:25 crc kubenswrapper[4828]: I1205 19:06:25.027184 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:25 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:25 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:25 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:25 crc kubenswrapper[4828]: I1205 19:06:25.027530 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:25 crc kubenswrapper[4828]: I1205 19:06:25.189635 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bvf6n" Dec 05 19:06:25 crc kubenswrapper[4828]: I1205 19:06:25.213245 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 05 19:06:25 crc kubenswrapper[4828]: W1205 19:06:25.258347 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod23c916de_c563_4074_a9f2_b5ac14f8ff45.slice/crio-2095fb2321f5929180d9f1e2f271cf99138d6a127e7c59385c59c31227e6262d WatchSource:0}: Error finding container 2095fb2321f5929180d9f1e2f271cf99138d6a127e7c59385c59c31227e6262d: Status 404 returned error can't find the container with id 2095fb2321f5929180d9f1e2f271cf99138d6a127e7c59385c59c31227e6262d Dec 05 19:06:25 crc kubenswrapper[4828]: I1205 19:06:25.277928 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jcw75" Dec 05 19:06:25 crc kubenswrapper[4828]: I1205 19:06:25.433129 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f361a9bc-ac07-4de3-9f58-e796f85f7405","Type":"ContainerStarted","Data":"4e624845868f2c9a64eb7476fef888777b3c61683cac61517a1304e2304983f3"} Dec 05 19:06:25 crc kubenswrapper[4828]: I1205 19:06:25.436991 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"23c916de-c563-4074-a9f2-b5ac14f8ff45","Type":"ContainerStarted","Data":"2095fb2321f5929180d9f1e2f271cf99138d6a127e7c59385c59c31227e6262d"} Dec 05 19:06:25 crc kubenswrapper[4828]: I1205 19:06:25.456182 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.456161136 podStartE2EDuration="4.456161136s" podCreationTimestamp="2025-12-05 19:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:25.450235622 +0000 UTC m=+163.345457938" watchObservedRunningTime="2025-12-05 19:06:25.456161136 +0000 UTC m=+163.351383442" Dec 05 19:06:25 crc kubenswrapper[4828]: I1205 19:06:25.682669 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bvf6n"] Dec 05 19:06:25 crc kubenswrapper[4828]: W1205 19:06:25.756883 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0595333b_a181_4a2b_90b8_e2accf80e78e.slice/crio-529e2c7aaa22e05bf5c835eea6f0be571d9e4dcf99dd482901707f3397c38e3c WatchSource:0}: Error finding container 529e2c7aaa22e05bf5c835eea6f0be571d9e4dcf99dd482901707f3397c38e3c: Status 404 returned error can't find the container with id 529e2c7aaa22e05bf5c835eea6f0be571d9e4dcf99dd482901707f3397c38e3c Dec 05 19:06:26 crc kubenswrapper[4828]: I1205 19:06:26.031998 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:26 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:26 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:26 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:26 crc kubenswrapper[4828]: I1205 19:06:26.032230 4828 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:26 crc kubenswrapper[4828]: I1205 19:06:26.477738 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" event={"ID":"0595333b-a181-4a2b-90b8-e2accf80e78e","Type":"ContainerStarted","Data":"529e2c7aaa22e05bf5c835eea6f0be571d9e4dcf99dd482901707f3397c38e3c"} Dec 05 19:06:27 crc kubenswrapper[4828]: I1205 19:06:27.025589 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:27 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:27 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:27 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:27 crc kubenswrapper[4828]: I1205 19:06:27.025664 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:27 crc kubenswrapper[4828]: I1205 19:06:27.496784 4828 generic.go:334] "Generic (PLEG): container finished" podID="f361a9bc-ac07-4de3-9f58-e796f85f7405" containerID="4e624845868f2c9a64eb7476fef888777b3c61683cac61517a1304e2304983f3" exitCode=0 Dec 05 19:06:27 crc kubenswrapper[4828]: I1205 19:06:27.496950 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f361a9bc-ac07-4de3-9f58-e796f85f7405","Type":"ContainerDied","Data":"4e624845868f2c9a64eb7476fef888777b3c61683cac61517a1304e2304983f3"} Dec 05 19:06:27 crc kubenswrapper[4828]: I1205 19:06:27.502322 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"23c916de-c563-4074-a9f2-b5ac14f8ff45","Type":"ContainerStarted","Data":"057486f2dc21259d9e994b72fffd19dd8ade27981a5c1757900ca40673f51140"} Dec 05 19:06:27 crc kubenswrapper[4828]: I1205 19:06:27.506570 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" event={"ID":"0595333b-a181-4a2b-90b8-e2accf80e78e","Type":"ContainerStarted","Data":"f51464c4859c84bf5e98a3c3e18defea1328f3d8b5af082f10d71a3d66887856"} Dec 05 19:06:27 crc kubenswrapper[4828]: I1205 19:06:27.529923 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.529905744 podStartE2EDuration="4.529905744s" podCreationTimestamp="2025-12-05 19:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:06:27.527113171 +0000 UTC m=+165.422335477" watchObservedRunningTime="2025-12-05 19:06:27.529905744 +0000 UTC m=+165.425128060" Dec 05 19:06:28 crc kubenswrapper[4828]: I1205 19:06:28.036615 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:28 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 
19:06:28 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:28 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:28 crc kubenswrapper[4828]: I1205 19:06:28.036903 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:28 crc kubenswrapper[4828]: I1205 19:06:28.518102 4828 generic.go:334] "Generic (PLEG): container finished" podID="23c916de-c563-4074-a9f2-b5ac14f8ff45" containerID="057486f2dc21259d9e994b72fffd19dd8ade27981a5c1757900ca40673f51140" exitCode=0 Dec 05 19:06:28 crc kubenswrapper[4828]: I1205 19:06:28.518213 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"23c916de-c563-4074-a9f2-b5ac14f8ff45","Type":"ContainerDied","Data":"057486f2dc21259d9e994b72fffd19dd8ade27981a5c1757900ca40673f51140"} Dec 05 19:06:29 crc kubenswrapper[4828]: I1205 19:06:29.024875 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:29 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:29 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:29 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:29 crc kubenswrapper[4828]: I1205 19:06:29.024933 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:29 crc kubenswrapper[4828]: I1205 19:06:29.396167 4828 patch_prober.go:28] interesting pod/console-f9d7485db-q9sfv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 05 19:06:29 crc kubenswrapper[4828]: I1205 19:06:29.396228 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-q9sfv" podUID="4f8576a7-5291-4b1f-a06c-35395fa9c9dd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 05 19:06:29 crc kubenswrapper[4828]: I1205 19:06:29.548814 4828 patch_prober.go:28] interesting pod/downloads-7954f5f757-f24wr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Dec 05 19:06:29 crc kubenswrapper[4828]: I1205 19:06:29.548974 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f24wr" podUID="dd460169-7ac7-48de-95a0-4c8ec9fd2d31" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 05 19:06:29 crc kubenswrapper[4828]: I1205 19:06:29.549503 4828 patch_prober.go:28] interesting pod/downloads-7954f5f757-f24wr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= 
Dec 05 19:06:29 crc kubenswrapper[4828]: I1205 19:06:29.549556 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f24wr" podUID="dd460169-7ac7-48de-95a0-4c8ec9fd2d31" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Dec 05 19:06:30 crc kubenswrapper[4828]: I1205 19:06:30.027733 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:30 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:30 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:30 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:30 crc kubenswrapper[4828]: I1205 19:06:30.027804 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:31 crc kubenswrapper[4828]: I1205 19:06:31.029338 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:31 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:31 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:31 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:31 crc kubenswrapper[4828]: I1205 19:06:31.029401 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:32 crc kubenswrapper[4828]: I1205 19:06:32.026104 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:32 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:32 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:32 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:32 crc kubenswrapper[4828]: I1205 19:06:32.026160 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:33 crc kubenswrapper[4828]: I1205 19:06:33.025069 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:33 crc kubenswrapper[4828]: [-]has-synced failed: reason withheld Dec 05 19:06:33 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:33 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:33 crc kubenswrapper[4828]: I1205 19:06:33.025154 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" 
podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:34 crc kubenswrapper[4828]: I1205 19:06:34.025240 4828 patch_prober.go:28] interesting pod/router-default-5444994796-ws4t8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 19:06:34 crc kubenswrapper[4828]: [+]has-synced ok Dec 05 19:06:34 crc kubenswrapper[4828]: [+]process-running ok Dec 05 19:06:34 crc kubenswrapper[4828]: healthz check failed Dec 05 19:06:34 crc kubenswrapper[4828]: I1205 19:06:34.025751 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ws4t8" podUID="54ac7001-a300-4f21-b1dd-486db1c1e641" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:06:35 crc kubenswrapper[4828]: I1205 19:06:35.027766 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:35 crc kubenswrapper[4828]: I1205 19:06:35.031643 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ws4t8" Dec 05 19:06:35 crc kubenswrapper[4828]: I1205 19:06:35.259528 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:06:35 crc kubenswrapper[4828]: I1205 19:06:35.259746 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:06:38 crc kubenswrapper[4828]: I1205 19:06:38.875743 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 19:06:39 crc kubenswrapper[4828]: I1205 19:06:39.153267 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:06:39 crc kubenswrapper[4828]: I1205 19:06:39.550716 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-f24wr" Dec 05 19:06:40 crc kubenswrapper[4828]: I1205 19:06:40.352067 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:40 crc kubenswrapper[4828]: I1205 19:06:40.356797 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.423194 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.432252 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.532714 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c916de-c563-4074-a9f2-b5ac14f8ff45-kube-api-access\") pod \"23c916de-c563-4074-a9f2-b5ac14f8ff45\" (UID: \"23c916de-c563-4074-a9f2-b5ac14f8ff45\") " Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.533082 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f361a9bc-ac07-4de3-9f58-e796f85f7405-kubelet-dir\") pod \"f361a9bc-ac07-4de3-9f58-e796f85f7405\" (UID: \"f361a9bc-ac07-4de3-9f58-e796f85f7405\") " Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.533124 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f361a9bc-ac07-4de3-9f58-e796f85f7405-kube-api-access\") pod \"f361a9bc-ac07-4de3-9f58-e796f85f7405\" (UID: \"f361a9bc-ac07-4de3-9f58-e796f85f7405\") " Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.533162 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c916de-c563-4074-a9f2-b5ac14f8ff45-kubelet-dir\") pod \"23c916de-c563-4074-a9f2-b5ac14f8ff45\" (UID: \"23c916de-c563-4074-a9f2-b5ac14f8ff45\") " Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.533265 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f361a9bc-ac07-4de3-9f58-e796f85f7405-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f361a9bc-ac07-4de3-9f58-e796f85f7405" (UID: "f361a9bc-ac07-4de3-9f58-e796f85f7405"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.533972 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23c916de-c563-4074-a9f2-b5ac14f8ff45-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "23c916de-c563-4074-a9f2-b5ac14f8ff45" (UID: "23c916de-c563-4074-a9f2-b5ac14f8ff45"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.540690 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c916de-c563-4074-a9f2-b5ac14f8ff45-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "23c916de-c563-4074-a9f2-b5ac14f8ff45" (UID: "23c916de-c563-4074-a9f2-b5ac14f8ff45"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.540788 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f361a9bc-ac07-4de3-9f58-e796f85f7405-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f361a9bc-ac07-4de3-9f58-e796f85f7405" (UID: "f361a9bc-ac07-4de3-9f58-e796f85f7405"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.542851 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c916de-c563-4074-a9f2-b5ac14f8ff45-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.542885 4828 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f361a9bc-ac07-4de3-9f58-e796f85f7405-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.542896 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f361a9bc-ac07-4de3-9f58-e796f85f7405-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.542910 4828 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c916de-c563-4074-a9f2-b5ac14f8ff45-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.622636 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f361a9bc-ac07-4de3-9f58-e796f85f7405","Type":"ContainerDied","Data":"7f751d6eba930a4428a9bd55324c11dfb71c35c579566eb5887e3093cc6742c6"} Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.622713 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f751d6eba930a4428a9bd55324c11dfb71c35c579566eb5887e3093cc6742c6" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.622893 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.626245 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"23c916de-c563-4074-a9f2-b5ac14f8ff45","Type":"ContainerDied","Data":"2095fb2321f5929180d9f1e2f271cf99138d6a127e7c59385c59c31227e6262d"} Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.626302 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2095fb2321f5929180d9f1e2f271cf99138d6a127e7c59385c59c31227e6262d" Dec 05 19:06:44 crc kubenswrapper[4828]: I1205 19:06:44.626382 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 19:06:50 crc kubenswrapper[4828]: I1205 19:06:50.104327 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gcfs9" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.008892 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 19:06:57 crc kubenswrapper[4828]: E1205 19:06:57.009321 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f361a9bc-ac07-4de3-9f58-e796f85f7405" containerName="pruner" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.009351 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f361a9bc-ac07-4de3-9f58-e796f85f7405" containerName="pruner" Dec 05 19:06:57 crc kubenswrapper[4828]: E1205 19:06:57.009402 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c916de-c563-4074-a9f2-b5ac14f8ff45" containerName="pruner" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.009423 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c916de-c563-4074-a9f2-b5ac14f8ff45" containerName="pruner" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.009660 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c916de-c563-4074-a9f2-b5ac14f8ff45" containerName="pruner" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.009702 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f361a9bc-ac07-4de3-9f58-e796f85f7405" containerName="pruner" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.010594 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.015089 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.018368 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.029915 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.103760 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec13a6b-1e24-4cce-9757-2a78f16ca6bf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.103999 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec13a6b-1e24-4cce-9757-2a78f16ca6bf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.205794 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec13a6b-1e24-4cce-9757-2a78f16ca6bf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 
05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.206063 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec13a6b-1e24-4cce-9757-2a78f16ca6bf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.206241 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec13a6b-1e24-4cce-9757-2a78f16ca6bf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.232422 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec13a6b-1e24-4cce-9757-2a78f16ca6bf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 19:06:57 crc kubenswrapper[4828]: I1205 19:06:57.347541 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 19:06:58 crc kubenswrapper[4828]: E1205 19:06:58.731434 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 05 19:06:58 crc kubenswrapper[4828]: E1205 19:06:58.732240 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dvmz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ggntr_openshift-marketplace(947f583f-02d4-4dde-9dd3-3991bad7d22e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 
05 19:06:58 crc kubenswrapper[4828]: E1205 19:06:58.733675 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ggntr" podUID="947f583f-02d4-4dde-9dd3-3991bad7d22e" Dec 05 19:07:01 crc kubenswrapper[4828]: E1205 19:07:01.733242 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ggntr" podUID="947f583f-02d4-4dde-9dd3-3991bad7d22e" Dec 05 19:07:01 crc kubenswrapper[4828]: E1205 19:07:01.864750 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 05 19:07:01 crc kubenswrapper[4828]: E1205 19:07:01.864913 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdhhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l5xtb_openshift-marketplace(2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 19:07:01 crc kubenswrapper[4828]: E1205 19:07:01.866805 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-l5xtb" podUID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" Dec 05 19:07:02 crc kubenswrapper[4828]: E1205 19:07:02.244557 4828 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 05 19:07:02 crc kubenswrapper[4828]: E1205 19:07:02.245040 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mvvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-k624z_openshift-marketplace(16299781-5338-4577-8a9a-2ec82c3b25b8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 19:07:02 crc kubenswrapper[4828]: E1205 19:07:02.246318 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-k624z" podUID="16299781-5338-4577-8a9a-2ec82c3b25b8" Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.403346 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.404114 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.408147 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.481238 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd5b1db3-f574-4ff6-9160-f7daf0564b25-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.481398 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dd5b1db3-f574-4ff6-9160-f7daf0564b25-var-lock\") pod \"installer-9-crc\" (UID: \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.481464 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd5b1db3-f574-4ff6-9160-f7daf0564b25-kube-api-access\") pod \"installer-9-crc\" (UID: \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.581921 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dd5b1db3-f574-4ff6-9160-f7daf0564b25-var-lock\") pod \"installer-9-crc\" (UID: \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.581985 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd5b1db3-f574-4ff6-9160-f7daf0564b25-kube-api-access\") pod \"installer-9-crc\" (UID: \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.582038 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd5b1db3-f574-4ff6-9160-f7daf0564b25-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.582094 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dd5b1db3-f574-4ff6-9160-f7daf0564b25-var-lock\") pod \"installer-9-crc\" (UID: \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.582147 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd5b1db3-f574-4ff6-9160-f7daf0564b25-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.599274 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd5b1db3-f574-4ff6-9160-f7daf0564b25-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"dd5b1db3-f574-4ff6-9160-f7daf0564b25\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:02 crc kubenswrapper[4828]: I1205 19:07:02.734704 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:04 crc kubenswrapper[4828]: E1205 19:07:04.220381 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l5xtb" podUID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" Dec 05 19:07:04 crc kubenswrapper[4828]: E1205 19:07:04.220951 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-k624z" podUID="16299781-5338-4577-8a9a-2ec82c3b25b8" Dec 05 19:07:04 crc kubenswrapper[4828]: E1205 19:07:04.289458 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 05 19:07:04 crc kubenswrapper[4828]: E1205 19:07:04.289625 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-554w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gcmtf_openshift-marketplace(3748abcf-49c3-4f2f-a663-3256367817b0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 19:07:04 crc kubenswrapper[4828]: E1205 19:07:04.290729 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gcmtf" podUID="3748abcf-49c3-4f2f-a663-3256367817b0" Dec 05 19:07:04 crc kubenswrapper[4828]: E1205 19:07:04.385147 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 05 19:07:04 crc kubenswrapper[4828]: E1205 19:07:04.385574 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gbc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-9zlnj_openshift-marketplace(ee23b6fa-d318-49d6-91fb-1dacad01ad5f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 19:07:04 crc kubenswrapper[4828]: E1205 19:07:04.386861 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9zlnj" podUID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" Dec 05 19:07:04 crc kubenswrapper[4828]: E1205 19:07:04.427162 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 05 19:07:04 crc kubenswrapper[4828]: E1205 19:07:04.427331 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5276z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fk7dk_openshift-marketplace(10ab3b3d-1c03-4d60-8a16-c34b4a313e7b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 19:07:04 crc kubenswrapper[4828]: E1205 19:07:04.428551 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fk7dk" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" Dec 05 19:07:05 crc kubenswrapper[4828]: I1205 19:07:05.259700 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:07:05 crc kubenswrapper[4828]: I1205 19:07:05.259764 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:07:05 crc kubenswrapper[4828]: I1205 19:07:05.259838 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:07:05 crc kubenswrapper[4828]: I1205 19:07:05.260410 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 19:07:05 crc kubenswrapper[4828]: I1205 19:07:05.260526 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e" gracePeriod=600
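
Here the machine-config-daemon fails its HTTP liveness probe (connection refused on 127.0.0.1:8798) and the kubelet kills the container with its 600s termination grace period so the runtime can restart it. A rough sketch of the probe loop as a plain Go program; the endpoint and grace period come from the log, while the failure threshold and period are assumed Kubernetes defaults, and this is an illustration, not the kubelet prober:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func healthy(url string) bool {
        resp, err := http.Get(url)
        if err != nil {
            return false // e.g. "connect: connection refused" while the daemon is down
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        const failureThreshold = 3 // assumed default; not visible in the log
        failures := 0
        for {
            if healthy("http://127.0.0.1:8798/health") {
                failures = 0
            } else if failures++; failures >= failureThreshold {
                fmt.Println("failed liveness probe, will be restarted (gracePeriod=600)")
                return // the kubelet now sends SIGTERM, then SIGKILL after the grace period
            }
            time.Sleep(10 * time.Second) // assumed periodSeconds
        }
    }

The ContainerDied event with exitCode=0 that follows shows the daemon honored SIGTERM well within the grace period.
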
pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e" gracePeriod=600 Dec 05 19:07:05 crc kubenswrapper[4828]: I1205 19:07:05.739286 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e" exitCode=0 Dec 05 19:07:05 crc kubenswrapper[4828]: I1205 19:07:05.739535 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e"} Dec 05 19:07:07 crc kubenswrapper[4828]: E1205 19:07:07.376178 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gcmtf" podUID="3748abcf-49c3-4f2f-a663-3256367817b0" Dec 05 19:07:07 crc kubenswrapper[4828]: E1205 19:07:07.376538 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9zlnj" podUID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" Dec 05 19:07:07 crc kubenswrapper[4828]: E1205 19:07:07.376613 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fk7dk" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" Dec 05 19:07:07 crc kubenswrapper[4828]: E1205 19:07:07.454798 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 05 19:07:07 crc kubenswrapper[4828]: E1205 19:07:07.455260 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wzvkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-v2zjd_openshift-marketplace(a8f1d24b-86b2-4b82-a397-85027c6090f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 19:07:07 crc kubenswrapper[4828]: E1205 19:07:07.458402 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-v2zjd" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" Dec 05 19:07:07 crc kubenswrapper[4828]: E1205 19:07:07.523372 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 05 19:07:07 crc kubenswrapper[4828]: E1205 19:07:07.524067 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kr5fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ggj5p_openshift-marketplace(20686f13-e212-4a7c-8a64-8f8674311fff): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 19:07:07 crc kubenswrapper[4828]: E1205 19:07:07.525748 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ggj5p" podUID="20686f13-e212-4a7c-8a64-8f8674311fff" Dec 05 19:07:07 crc kubenswrapper[4828]: I1205 19:07:07.673480 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 19:07:07 crc kubenswrapper[4828]: I1205 19:07:07.758201 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"d8de933ba36cba5665f56451f60fe62908c403f5937616de58a6e4ebbe2c5830"} Dec 05 19:07:07 crc kubenswrapper[4828]: I1205 19:07:07.761086 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bvf6n" event={"ID":"0595333b-a181-4a2b-90b8-e2accf80e78e","Type":"ContainerStarted","Data":"3c5b30f0fbe12ca66b5d25de11b6458febf58156b8286e5bfcad125449b4d02a"} Dec 05 19:07:07 crc kubenswrapper[4828]: I1205 19:07:07.762619 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf","Type":"ContainerStarted","Data":"e83522c9602d1c6066738e45681f7afec1015a8e538f2760358b1027a7f8dc96"} Dec 05 19:07:07 crc kubenswrapper[4828]: E1205 19:07:07.763482 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v2zjd" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" Dec 05 19:07:07 crc kubenswrapper[4828]: 
E1205 19:07:07.763586 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ggj5p" podUID="20686f13-e212-4a7c-8a64-8f8674311fff" Dec 05 19:07:07 crc kubenswrapper[4828]: I1205 19:07:07.807941 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bvf6n" podStartSLOduration=186.807919814 podStartE2EDuration="3m6.807919814s" podCreationTimestamp="2025-12-05 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:07:07.805373774 +0000 UTC m=+205.700596080" watchObservedRunningTime="2025-12-05 19:07:07.807919814 +0000 UTC m=+205.703142130" Dec 05 19:07:07 crc kubenswrapper[4828]: I1205 19:07:07.818440 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 19:07:07 crc kubenswrapper[4828]: W1205 19:07:07.825422 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddd5b1db3_f574_4ff6_9160_f7daf0564b25.slice/crio-8280d3c037271a01da908c285773cca1d182e6818d49e078fe408155b3a5d4b5 WatchSource:0}: Error finding container 8280d3c037271a01da908c285773cca1d182e6818d49e078fe408155b3a5d4b5: Status 404 returned error can't find the container with id 8280d3c037271a01da908c285773cca1d182e6818d49e078fe408155b3a5d4b5 Dec 05 19:07:08 crc kubenswrapper[4828]: I1205 19:07:08.769762 4828 generic.go:334] "Generic (PLEG): container finished" podID="3ec13a6b-1e24-4cce-9757-2a78f16ca6bf" containerID="a2f5322ebbf982b61f396228cea503ac1b9f0bbf4ac4b0532ffaa28adc52b136" exitCode=0 Dec 05 19:07:08 crc kubenswrapper[4828]: I1205 19:07:08.770018 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf","Type":"ContainerDied","Data":"a2f5322ebbf982b61f396228cea503ac1b9f0bbf4ac4b0532ffaa28adc52b136"} Dec 05 19:07:08 crc kubenswrapper[4828]: I1205 19:07:08.771782 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dd5b1db3-f574-4ff6-9160-f7daf0564b25","Type":"ContainerStarted","Data":"1828efc73ccc249a8cee67af51bd91f0780e88cc9127586f3daf2443b02be907"} Dec 05 19:07:08 crc kubenswrapper[4828]: I1205 19:07:08.772048 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dd5b1db3-f574-4ff6-9160-f7daf0564b25","Type":"ContainerStarted","Data":"8280d3c037271a01da908c285773cca1d182e6818d49e078fe408155b3a5d4b5"} Dec 05 19:07:10 crc kubenswrapper[4828]: I1205 19:07:09.999640 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
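
The pod_startup_latency_tracker records here encode two durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). When the pull timestamps are the zero value 0001-01-01, as for network-metrics-daemon-bvf6n above, the two durations coincide. A quick Go check of that arithmetic against the certified-operators-ggntr record that appears later in this log:

    package main

    import (
        "fmt"
        "time"
    )

    // Layout matching the timestamps as they are printed in the log.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Values copied from the certified-operators-ggntr startup record.
        created := mustParse("2025-12-05 19:06:15 +0000 UTC")
        firstPull := mustParse("2025-12-05 19:06:18.224194703 +0000 UTC")
        lastPull := mustParse("2025-12-05 19:07:16.346603171 +0000 UTC")
        observed := mustParse("2025-12-05 19:07:17.472171725 +0000 UTC") // watchObservedRunningTime

        e2e := observed.Sub(created)
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println("podStartE2EDuration:", e2e) // 1m2.472171725s, matching the log
        fmt.Println("podStartSLOduration:", slo) // 4.349763257s, matching the log
    }

So the SLO number deliberately excludes time spent pulling images, which is why the marketplace pods below report SLO durations of a few seconds against end-to-end durations over a minute.
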
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 19:07:10 crc kubenswrapper[4828]: I1205 19:07:10.019243 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=8.019222069 podStartE2EDuration="8.019222069s" podCreationTimestamp="2025-12-05 19:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:07:08.805899689 +0000 UTC m=+206.701122015" watchObservedRunningTime="2025-12-05 19:07:10.019222069 +0000 UTC m=+207.914444375" Dec 05 19:07:10 crc kubenswrapper[4828]: I1205 19:07:10.081872 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec13a6b-1e24-4cce-9757-2a78f16ca6bf-kubelet-dir\") pod \"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf\" (UID: \"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf\") " Dec 05 19:07:10 crc kubenswrapper[4828]: I1205 19:07:10.082184 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec13a6b-1e24-4cce-9757-2a78f16ca6bf-kube-api-access\") pod \"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf\" (UID: \"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf\") " Dec 05 19:07:10 crc kubenswrapper[4828]: I1205 19:07:10.081992 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ec13a6b-1e24-4cce-9757-2a78f16ca6bf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3ec13a6b-1e24-4cce-9757-2a78f16ca6bf" (UID: "3ec13a6b-1e24-4cce-9757-2a78f16ca6bf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:07:10 crc kubenswrapper[4828]: I1205 19:07:10.082417 4828 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec13a6b-1e24-4cce-9757-2a78f16ca6bf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:10 crc kubenswrapper[4828]: I1205 19:07:10.087670 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec13a6b-1e24-4cce-9757-2a78f16ca6bf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3ec13a6b-1e24-4cce-9757-2a78f16ca6bf" (UID: "3ec13a6b-1e24-4cce-9757-2a78f16ca6bf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:07:10 crc kubenswrapper[4828]: I1205 19:07:10.183233 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec13a6b-1e24-4cce-9757-2a78f16ca6bf-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:10 crc kubenswrapper[4828]: I1205 19:07:10.782394 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3ec13a6b-1e24-4cce-9757-2a78f16ca6bf","Type":"ContainerDied","Data":"e83522c9602d1c6066738e45681f7afec1015a8e538f2760358b1027a7f8dc96"} Dec 05 19:07:10 crc kubenswrapper[4828]: I1205 19:07:10.782432 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e83522c9602d1c6066738e45681f7afec1015a8e538f2760358b1027a7f8dc96" Dec 05 19:07:10 crc kubenswrapper[4828]: I1205 19:07:10.782457 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 19:07:13 crc kubenswrapper[4828]: I1205 19:07:13.799964 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggntr" event={"ID":"947f583f-02d4-4dde-9dd3-3991bad7d22e","Type":"ContainerStarted","Data":"d5feee727b7af315e0d17d07eaa7a75c2d77e4f29dc0ce927f6efbead449edbe"} Dec 05 19:07:14 crc kubenswrapper[4828]: I1205 19:07:14.806836 4828 generic.go:334] "Generic (PLEG): container finished" podID="947f583f-02d4-4dde-9dd3-3991bad7d22e" containerID="d5feee727b7af315e0d17d07eaa7a75c2d77e4f29dc0ce927f6efbead449edbe" exitCode=0 Dec 05 19:07:14 crc kubenswrapper[4828]: I1205 19:07:14.806856 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggntr" event={"ID":"947f583f-02d4-4dde-9dd3-3991bad7d22e","Type":"ContainerDied","Data":"d5feee727b7af315e0d17d07eaa7a75c2d77e4f29dc0ce927f6efbead449edbe"} Dec 05 19:07:16 crc kubenswrapper[4828]: I1205 19:07:16.825779 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggntr" event={"ID":"947f583f-02d4-4dde-9dd3-3991bad7d22e","Type":"ContainerStarted","Data":"dfdb49b110d957cc4749c5d267074f6774ad6d4e6e5482bf6bd44782a5867fb7"} Dec 05 19:07:17 crc kubenswrapper[4828]: I1205 19:07:17.472189 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ggntr" podStartSLOduration=4.349763257 podStartE2EDuration="1m2.472171725s" podCreationTimestamp="2025-12-05 19:06:15 +0000 UTC" firstStartedPulling="2025-12-05 19:06:18.224194703 +0000 UTC m=+156.119417009" lastFinishedPulling="2025-12-05 19:07:16.346603171 +0000 UTC m=+214.241825477" observedRunningTime="2025-12-05 19:07:16.847495636 +0000 UTC m=+214.742717942" watchObservedRunningTime="2025-12-05 19:07:17.472171725 +0000 UTC m=+215.367394031" Dec 05 19:07:17 crc kubenswrapper[4828]: I1205 19:07:17.832026 4828 generic.go:334] "Generic (PLEG): container finished" podID="16299781-5338-4577-8a9a-2ec82c3b25b8" containerID="3bedbcd92585427088622036796cf54c94c45f2120a7849a790880170b257cad" exitCode=0 Dec 05 19:07:17 crc kubenswrapper[4828]: I1205 19:07:17.832092 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k624z" event={"ID":"16299781-5338-4577-8a9a-2ec82c3b25b8","Type":"ContainerDied","Data":"3bedbcd92585427088622036796cf54c94c45f2120a7849a790880170b257cad"} Dec 05 19:07:18 crc kubenswrapper[4828]: I1205 19:07:18.837509 4828 generic.go:334] "Generic (PLEG): container finished" podID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" containerID="496eb5a613a3af2d823948e5a6a431c4412a48199aa41a92364d8e356680c1c6" exitCode=0 Dec 05 19:07:18 crc kubenswrapper[4828]: I1205 19:07:18.837553 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5xtb" event={"ID":"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef","Type":"ContainerDied","Data":"496eb5a613a3af2d823948e5a6a431c4412a48199aa41a92364d8e356680c1c6"} Dec 05 19:07:19 crc kubenswrapper[4828]: I1205 19:07:19.845192 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k624z" event={"ID":"16299781-5338-4577-8a9a-2ec82c3b25b8","Type":"ContainerStarted","Data":"45b09ef07805ccf43451b458b681fad92428b2328c6af786da113af11e8931cb"} Dec 05 19:07:19 crc kubenswrapper[4828]: I1205 19:07:19.848436 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-l5xtb" event={"ID":"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef","Type":"ContainerStarted","Data":"c152bceb49b0bb6d14cc008eeb74593807d909e3e26b7eabca34393bc1fe629e"} Dec 05 19:07:19 crc kubenswrapper[4828]: I1205 19:07:19.863451 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k624z" podStartSLOduration=5.231304867 podStartE2EDuration="1m5.863429018s" podCreationTimestamp="2025-12-05 19:06:14 +0000 UTC" firstStartedPulling="2025-12-05 19:06:18.149012477 +0000 UTC m=+156.044234783" lastFinishedPulling="2025-12-05 19:07:18.781136628 +0000 UTC m=+216.676358934" observedRunningTime="2025-12-05 19:07:19.861166697 +0000 UTC m=+217.756389013" watchObservedRunningTime="2025-12-05 19:07:19.863429018 +0000 UTC m=+217.758651324" Dec 05 19:07:20 crc kubenswrapper[4828]: I1205 19:07:20.467040 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l5xtb" podStartSLOduration=5.406746042 podStartE2EDuration="1m6.467018955s" podCreationTimestamp="2025-12-05 19:06:14 +0000 UTC" firstStartedPulling="2025-12-05 19:06:18.157988321 +0000 UTC m=+156.053210627" lastFinishedPulling="2025-12-05 19:07:19.218261234 +0000 UTC m=+217.113483540" observedRunningTime="2025-12-05 19:07:19.883213214 +0000 UTC m=+217.778435520" watchObservedRunningTime="2025-12-05 19:07:20.467018955 +0000 UTC m=+218.362241261" Dec 05 19:07:20 crc kubenswrapper[4828]: I1205 19:07:20.857057 4828 generic.go:334] "Generic (PLEG): container finished" podID="3748abcf-49c3-4f2f-a663-3256367817b0" containerID="04f8c893167e0e9daa4380ba80cf2a5ffb4c8042498b4e7f323d6476f24c8ee6" exitCode=0 Dec 05 19:07:20 crc kubenswrapper[4828]: I1205 19:07:20.857112 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcmtf" event={"ID":"3748abcf-49c3-4f2f-a663-3256367817b0","Type":"ContainerDied","Data":"04f8c893167e0e9daa4380ba80cf2a5ffb4c8042498b4e7f323d6476f24c8ee6"} Dec 05 19:07:20 crc kubenswrapper[4828]: I1205 19:07:20.860094 4828 generic.go:334] "Generic (PLEG): container finished" podID="20686f13-e212-4a7c-8a64-8f8674311fff" containerID="8239fdcc04d165852b6cd46ff8149894478109656f4192b8bf57240c3d8f21b9" exitCode=0 Dec 05 19:07:20 crc kubenswrapper[4828]: I1205 19:07:20.860129 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggj5p" event={"ID":"20686f13-e212-4a7c-8a64-8f8674311fff","Type":"ContainerDied","Data":"8239fdcc04d165852b6cd46ff8149894478109656f4192b8bf57240c3d8f21b9"} Dec 05 19:07:21 crc kubenswrapper[4828]: I1205 19:07:21.866765 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcmtf" event={"ID":"3748abcf-49c3-4f2f-a663-3256367817b0","Type":"ContainerStarted","Data":"1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115"} Dec 05 19:07:21 crc kubenswrapper[4828]: I1205 19:07:21.869277 4828 generic.go:334] "Generic (PLEG): container finished" podID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" containerID="be0a26dfda023570f6ab6bce2765fba396d263cc1f5f64ad1f0e01d76abb4e08" exitCode=0 Dec 05 19:07:21 crc kubenswrapper[4828]: I1205 19:07:21.869322 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zlnj" event={"ID":"ee23b6fa-d318-49d6-91fb-1dacad01ad5f","Type":"ContainerDied","Data":"be0a26dfda023570f6ab6bce2765fba396d263cc1f5f64ad1f0e01d76abb4e08"} Dec 05 19:07:21 crc 
kubenswrapper[4828]: I1205 19:07:21.872755 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggj5p" event={"ID":"20686f13-e212-4a7c-8a64-8f8674311fff","Type":"ContainerStarted","Data":"66545eae14d3fae2372b56659b9ad84a5367f75346d4d75a6af102b86554dcbb"} Dec 05 19:07:21 crc kubenswrapper[4828]: I1205 19:07:21.890937 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gcmtf" podStartSLOduration=2.914023525 podStartE2EDuration="1m4.890923003s" podCreationTimestamp="2025-12-05 19:06:17 +0000 UTC" firstStartedPulling="2025-12-05 19:06:19.31087867 +0000 UTC m=+157.206100976" lastFinishedPulling="2025-12-05 19:07:21.287778148 +0000 UTC m=+219.183000454" observedRunningTime="2025-12-05 19:07:21.886804312 +0000 UTC m=+219.782026618" watchObservedRunningTime="2025-12-05 19:07:21.890923003 +0000 UTC m=+219.786145309" Dec 05 19:07:21 crc kubenswrapper[4828]: I1205 19:07:21.950089 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ggj5p" podStartSLOduration=3.040066905 podStartE2EDuration="1m3.950064946s" podCreationTimestamp="2025-12-05 19:06:18 +0000 UTC" firstStartedPulling="2025-12-05 19:06:20.316195088 +0000 UTC m=+158.211417394" lastFinishedPulling="2025-12-05 19:07:21.226193129 +0000 UTC m=+219.121415435" observedRunningTime="2025-12-05 19:07:21.949717457 +0000 UTC m=+219.844939783" watchObservedRunningTime="2025-12-05 19:07:21.950064946 +0000 UTC m=+219.845287262" Dec 05 19:07:23 crc kubenswrapper[4828]: I1205 19:07:23.887717 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zlnj" event={"ID":"ee23b6fa-d318-49d6-91fb-1dacad01ad5f","Type":"ContainerStarted","Data":"5e4dc0a310c58d6631428726c53070cfe16bf2b5ddd36c37cde902c6d2beebfe"} Dec 05 19:07:23 crc kubenswrapper[4828]: I1205 19:07:23.890543 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fk7dk" event={"ID":"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b","Type":"ContainerStarted","Data":"c9ba57c711eedd2ba39884bfa6be2454861a52c77ec8a677396abf8b55c7e0ab"} Dec 05 19:07:23 crc kubenswrapper[4828]: I1205 19:07:23.905507 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9zlnj" podStartSLOduration=3.142649869 podStartE2EDuration="1m7.905490398s" podCreationTimestamp="2025-12-05 19:06:16 +0000 UTC" firstStartedPulling="2025-12-05 19:06:18.174862642 +0000 UTC m=+156.070084948" lastFinishedPulling="2025-12-05 19:07:22.937703171 +0000 UTC m=+220.832925477" observedRunningTime="2025-12-05 19:07:23.904516112 +0000 UTC m=+221.799738418" watchObservedRunningTime="2025-12-05 19:07:23.905490398 +0000 UTC m=+221.800712704" Dec 05 19:07:24 crc kubenswrapper[4828]: I1205 19:07:24.898667 4828 generic.go:334] "Generic (PLEG): container finished" podID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" containerID="c9ba57c711eedd2ba39884bfa6be2454861a52c77ec8a677396abf8b55c7e0ab" exitCode=0 Dec 05 19:07:24 crc kubenswrapper[4828]: I1205 19:07:24.898761 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fk7dk" event={"ID":"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b","Type":"ContainerDied","Data":"c9ba57c711eedd2ba39884bfa6be2454861a52c77ec8a677396abf8b55c7e0ab"} Dec 05 19:07:25 crc kubenswrapper[4828]: I1205 19:07:25.286694 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:07:25 crc kubenswrapper[4828]: I1205 19:07:25.286744 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:07:25 crc kubenswrapper[4828]: I1205 19:07:25.353274 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:07:25 crc kubenswrapper[4828]: I1205 19:07:25.353342 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:07:25 crc kubenswrapper[4828]: I1205 19:07:25.406673 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:07:25 crc kubenswrapper[4828]: I1205 19:07:25.407095 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:07:25 crc kubenswrapper[4828]: I1205 19:07:25.534022 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:07:25 crc kubenswrapper[4828]: I1205 19:07:25.534307 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:07:25 crc kubenswrapper[4828]: I1205 19:07:25.570166 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:07:25 crc kubenswrapper[4828]: I1205 19:07:25.936010 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:07:25 crc kubenswrapper[4828]: I1205 19:07:25.948691 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:07:25 crc kubenswrapper[4828]: I1205 19:07:25.950254 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:07:27 crc kubenswrapper[4828]: I1205 19:07:27.238879 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9zlnj" Dec 05 19:07:27 crc kubenswrapper[4828]: I1205 19:07:27.238941 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9zlnj" Dec 05 19:07:27 crc kubenswrapper[4828]: I1205 19:07:27.293896 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9zlnj" Dec 05 19:07:27 crc kubenswrapper[4828]: I1205 19:07:27.659337 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:07:27 crc kubenswrapper[4828]: I1205 19:07:27.659530 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:07:27 crc kubenswrapper[4828]: I1205 19:07:27.704105 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:07:27 crc kubenswrapper[4828]: I1205 19:07:27.959729 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:07:28 crc kubenswrapper[4828]: I1205 19:07:28.789196 4828 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:07:28 crc kubenswrapper[4828]: I1205 19:07:28.789522 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:07:28 crc kubenswrapper[4828]: I1205 19:07:28.834740 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6nk4"] Dec 05 19:07:28 crc kubenswrapper[4828]: I1205 19:07:28.856957 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:07:28 crc kubenswrapper[4828]: I1205 19:07:28.969040 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ggj5p" Dec 05 19:07:29 crc kubenswrapper[4828]: I1205 19:07:29.632562 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ggntr"] Dec 05 19:07:29 crc kubenswrapper[4828]: I1205 19:07:29.632837 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ggntr" podUID="947f583f-02d4-4dde-9dd3-3991bad7d22e" containerName="registry-server" containerID="cri-o://dfdb49b110d957cc4749c5d267074f6774ad6d4e6e5482bf6bd44782a5867fb7" gracePeriod=2 Dec 05 19:07:30 crc kubenswrapper[4828]: I1205 19:07:30.631269 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcmtf"] Dec 05 19:07:30 crc kubenswrapper[4828]: I1205 19:07:30.929090 4828 generic.go:334] "Generic (PLEG): container finished" podID="947f583f-02d4-4dde-9dd3-3991bad7d22e" containerID="dfdb49b110d957cc4749c5d267074f6774ad6d4e6e5482bf6bd44782a5867fb7" exitCode=0 Dec 05 19:07:30 crc kubenswrapper[4828]: I1205 19:07:30.929209 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggntr" event={"ID":"947f583f-02d4-4dde-9dd3-3991bad7d22e","Type":"ContainerDied","Data":"dfdb49b110d957cc4749c5d267074f6774ad6d4e6e5482bf6bd44782a5867fb7"} Dec 05 19:07:30 crc kubenswrapper[4828]: I1205 19:07:30.929341 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gcmtf" podUID="3748abcf-49c3-4f2f-a663-3256367817b0" containerName="registry-server" containerID="cri-o://1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115" gracePeriod=2 Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.713061 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.853333 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvmz9\" (UniqueName: \"kubernetes.io/projected/947f583f-02d4-4dde-9dd3-3991bad7d22e-kube-api-access-dvmz9\") pod \"947f583f-02d4-4dde-9dd3-3991bad7d22e\" (UID: \"947f583f-02d4-4dde-9dd3-3991bad7d22e\") " Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.853420 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947f583f-02d4-4dde-9dd3-3991bad7d22e-utilities\") pod \"947f583f-02d4-4dde-9dd3-3991bad7d22e\" (UID: \"947f583f-02d4-4dde-9dd3-3991bad7d22e\") " Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.853456 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947f583f-02d4-4dde-9dd3-3991bad7d22e-catalog-content\") pod \"947f583f-02d4-4dde-9dd3-3991bad7d22e\" (UID: \"947f583f-02d4-4dde-9dd3-3991bad7d22e\") " Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.854701 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/947f583f-02d4-4dde-9dd3-3991bad7d22e-utilities" (OuterVolumeSpecName: "utilities") pod "947f583f-02d4-4dde-9dd3-3991bad7d22e" (UID: "947f583f-02d4-4dde-9dd3-3991bad7d22e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.859362 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947f583f-02d4-4dde-9dd3-3991bad7d22e-kube-api-access-dvmz9" (OuterVolumeSpecName: "kube-api-access-dvmz9") pod "947f583f-02d4-4dde-9dd3-3991bad7d22e" (UID: "947f583f-02d4-4dde-9dd3-3991bad7d22e"). InnerVolumeSpecName "kube-api-access-dvmz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.903473 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/947f583f-02d4-4dde-9dd3-3991bad7d22e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "947f583f-02d4-4dde-9dd3-3991bad7d22e" (UID: "947f583f-02d4-4dde-9dd3-3991bad7d22e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.936417 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggntr" event={"ID":"947f583f-02d4-4dde-9dd3-3991bad7d22e","Type":"ContainerDied","Data":"cc54fdc14961bb0297b612538f89f4559f386f493e1fe2088f570eb55db4ffb7"} Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.936521 4828 scope.go:117] "RemoveContainer" containerID="dfdb49b110d957cc4749c5d267074f6774ad6d4e6e5482bf6bd44782a5867fb7" Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.936534 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ggntr" Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.954756 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvmz9\" (UniqueName: \"kubernetes.io/projected/947f583f-02d4-4dde-9dd3-3991bad7d22e-kube-api-access-dvmz9\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.954799 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947f583f-02d4-4dde-9dd3-3991bad7d22e-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.954812 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947f583f-02d4-4dde-9dd3-3991bad7d22e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.968199 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ggntr"] Dec 05 19:07:31 crc kubenswrapper[4828]: I1205 19:07:31.971142 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ggntr"] Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.373746 4828 scope.go:117] "RemoveContainer" containerID="d5feee727b7af315e0d17d07eaa7a75c2d77e4f29dc0ce927f6efbead449edbe" Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.394249 4828 scope.go:117] "RemoveContainer" containerID="9690cfe6489ea1da71b12646ce1044a24b43116781d8adc2df5b36f34cafc74c" Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.460770 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="947f583f-02d4-4dde-9dd3-3991bad7d22e" path="/var/lib/kubelet/pods/947f583f-02d4-4dde-9dd3-3991bad7d22e/volumes" Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.786039 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.944068 4828 generic.go:334] "Generic (PLEG): container finished" podID="3748abcf-49c3-4f2f-a663-3256367817b0" containerID="1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115" exitCode=0 Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.944120 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcmtf" Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.944199 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcmtf" event={"ID":"3748abcf-49c3-4f2f-a663-3256367817b0","Type":"ContainerDied","Data":"1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115"} Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.944239 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcmtf" event={"ID":"3748abcf-49c3-4f2f-a663-3256367817b0","Type":"ContainerDied","Data":"bddf82d233c4aedf3620f5d1fcd854c2cda4d48c89c3017f47d86f631450a7b9"} Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.944267 4828 scope.go:117] "RemoveContainer" containerID="1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115" Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.965574 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3748abcf-49c3-4f2f-a663-3256367817b0-utilities\") pod \"3748abcf-49c3-4f2f-a663-3256367817b0\" (UID: \"3748abcf-49c3-4f2f-a663-3256367817b0\") " Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.965639 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-554w6\" (UniqueName: \"kubernetes.io/projected/3748abcf-49c3-4f2f-a663-3256367817b0-kube-api-access-554w6\") pod \"3748abcf-49c3-4f2f-a663-3256367817b0\" (UID: \"3748abcf-49c3-4f2f-a663-3256367817b0\") " Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.965680 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3748abcf-49c3-4f2f-a663-3256367817b0-catalog-content\") pod \"3748abcf-49c3-4f2f-a663-3256367817b0\" (UID: \"3748abcf-49c3-4f2f-a663-3256367817b0\") " Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.966869 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3748abcf-49c3-4f2f-a663-3256367817b0-utilities" (OuterVolumeSpecName: "utilities") pod "3748abcf-49c3-4f2f-a663-3256367817b0" (UID: "3748abcf-49c3-4f2f-a663-3256367817b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:07:32 crc kubenswrapper[4828]: I1205 19:07:32.971671 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3748abcf-49c3-4f2f-a663-3256367817b0-kube-api-access-554w6" (OuterVolumeSpecName: "kube-api-access-554w6") pod "3748abcf-49c3-4f2f-a663-3256367817b0" (UID: "3748abcf-49c3-4f2f-a663-3256367817b0"). InnerVolumeSpecName "kube-api-access-554w6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:07:33 crc kubenswrapper[4828]: I1205 19:07:33.033082 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggj5p"] Dec 05 19:07:33 crc kubenswrapper[4828]: I1205 19:07:33.033368 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ggj5p" podUID="20686f13-e212-4a7c-8a64-8f8674311fff" containerName="registry-server" containerID="cri-o://66545eae14d3fae2372b56659b9ad84a5367f75346d4d75a6af102b86554dcbb" gracePeriod=2 Dec 05 19:07:33 crc kubenswrapper[4828]: I1205 19:07:33.052842 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3748abcf-49c3-4f2f-a663-3256367817b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3748abcf-49c3-4f2f-a663-3256367817b0" (UID: "3748abcf-49c3-4f2f-a663-3256367817b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:07:33 crc kubenswrapper[4828]: I1205 19:07:33.067270 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3748abcf-49c3-4f2f-a663-3256367817b0-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:33 crc kubenswrapper[4828]: I1205 19:07:33.067312 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-554w6\" (UniqueName: \"kubernetes.io/projected/3748abcf-49c3-4f2f-a663-3256367817b0-kube-api-access-554w6\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:33 crc kubenswrapper[4828]: I1205 19:07:33.067324 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3748abcf-49c3-4f2f-a663-3256367817b0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:33 crc kubenswrapper[4828]: I1205 19:07:33.269705 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcmtf"] Dec 05 19:07:33 crc kubenswrapper[4828]: I1205 19:07:33.278876 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcmtf"] Dec 05 19:07:33 crc kubenswrapper[4828]: I1205 19:07:33.955691 4828 generic.go:334] "Generic (PLEG): container finished" podID="a8f1d24b-86b2-4b82-a397-85027c6090f0" containerID="a378b12b6712d2239882062fbf5d82613f1b1817e7b680b56d28c2de9b7e341c" exitCode=0 Dec 05 19:07:33 crc kubenswrapper[4828]: I1205 19:07:33.955734 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2zjd" event={"ID":"a8f1d24b-86b2-4b82-a397-85027c6090f0","Type":"ContainerDied","Data":"a378b12b6712d2239882062fbf5d82613f1b1817e7b680b56d28c2de9b7e341c"} Dec 05 19:07:34 crc kubenswrapper[4828]: I1205 19:07:34.233586 4828 scope.go:117] "RemoveContainer" containerID="04f8c893167e0e9daa4380ba80cf2a5ffb4c8042498b4e7f323d6476f24c8ee6" Dec 05 19:07:34 crc kubenswrapper[4828]: I1205 19:07:34.458467 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3748abcf-49c3-4f2f-a663-3256367817b0" path="/var/lib/kubelet/pods/3748abcf-49c3-4f2f-a663-3256367817b0/volumes" Dec 05 19:07:36 crc kubenswrapper[4828]: I1205 19:07:36.515347 4828 scope.go:117] "RemoveContainer" containerID="dd8eba71c6f6c9b7fd750b3c198eb4c8313bb54468edde6fe693322b46a4efad" Dec 05 19:07:36 crc kubenswrapper[4828]: I1205 19:07:36.532700 4828 scope.go:117] "RemoveContainer" containerID="1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115" 
Dec 05 19:07:36 crc kubenswrapper[4828]: E1205 19:07:36.533624 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115\": container with ID starting with 1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115 not found: ID does not exist" containerID="1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115"
Dec 05 19:07:36 crc kubenswrapper[4828]: I1205 19:07:36.533670 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115"} err="failed to get container status \"1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115\": rpc error: code = NotFound desc = could not find container \"1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115\": container with ID starting with 1f3fd4d0ab4d7fb4a1f142e98b66b497701bc5cf1fbba35b2c2d6a3a60801115 not found: ID does not exist"
Dec 05 19:07:36 crc kubenswrapper[4828]: I1205 19:07:36.533848 4828 scope.go:117] "RemoveContainer" containerID="04f8c893167e0e9daa4380ba80cf2a5ffb4c8042498b4e7f323d6476f24c8ee6"
Dec 05 19:07:36 crc kubenswrapper[4828]: E1205 19:07:36.534162 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04f8c893167e0e9daa4380ba80cf2a5ffb4c8042498b4e7f323d6476f24c8ee6\": container with ID starting with 04f8c893167e0e9daa4380ba80cf2a5ffb4c8042498b4e7f323d6476f24c8ee6 not found: ID does not exist" containerID="04f8c893167e0e9daa4380ba80cf2a5ffb4c8042498b4e7f323d6476f24c8ee6"
Dec 05 19:07:36 crc kubenswrapper[4828]: I1205 19:07:36.534191 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f8c893167e0e9daa4380ba80cf2a5ffb4c8042498b4e7f323d6476f24c8ee6"} err="failed to get container status \"04f8c893167e0e9daa4380ba80cf2a5ffb4c8042498b4e7f323d6476f24c8ee6\": rpc error: code = NotFound desc = could not find container \"04f8c893167e0e9daa4380ba80cf2a5ffb4c8042498b4e7f323d6476f24c8ee6\": container with ID starting with 04f8c893167e0e9daa4380ba80cf2a5ffb4c8042498b4e7f323d6476f24c8ee6 not found: ID does not exist"
Dec 05 19:07:36 crc kubenswrapper[4828]: I1205 19:07:36.534210 4828 scope.go:117] "RemoveContainer" containerID="dd8eba71c6f6c9b7fd750b3c198eb4c8313bb54468edde6fe693322b46a4efad"
Dec 05 19:07:36 crc kubenswrapper[4828]: E1205 19:07:36.534548 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd8eba71c6f6c9b7fd750b3c198eb4c8313bb54468edde6fe693322b46a4efad\": container with ID starting with dd8eba71c6f6c9b7fd750b3c198eb4c8313bb54468edde6fe693322b46a4efad not found: ID does not exist" containerID="dd8eba71c6f6c9b7fd750b3c198eb4c8313bb54468edde6fe693322b46a4efad"
Dec 05 19:07:36 crc kubenswrapper[4828]: I1205 19:07:36.534605 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd8eba71c6f6c9b7fd750b3c198eb4c8313bb54468edde6fe693322b46a4efad"} err="failed to get container status \"dd8eba71c6f6c9b7fd750b3c198eb4c8313bb54468edde6fe693322b46a4efad\": rpc error: code = NotFound desc = could not find container \"dd8eba71c6f6c9b7fd750b3c198eb4c8313bb54468edde6fe693322b46a4efad\": container with ID starting with dd8eba71c6f6c9b7fd750b3c198eb4c8313bb54468edde6fe693322b46a4efad not found: ID does not exist"
Dec 05 19:07:36 crc kubenswrapper[4828]: I1205 19:07:36.979262 4828 generic.go:334] "Generic (PLEG): container finished" podID="20686f13-e212-4a7c-8a64-8f8674311fff" containerID="66545eae14d3fae2372b56659b9ad84a5367f75346d4d75a6af102b86554dcbb" exitCode=0
Dec 05 19:07:36 crc kubenswrapper[4828]: I1205 19:07:36.979345 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggj5p" event={"ID":"20686f13-e212-4a7c-8a64-8f8674311fff","Type":"ContainerDied","Data":"66545eae14d3fae2372b56659b9ad84a5367f75346d4d75a6af102b86554dcbb"}
Dec 05 19:07:36 crc kubenswrapper[4828]: I1205 19:07:36.983276 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fk7dk" event={"ID":"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b","Type":"ContainerStarted","Data":"e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484"}
Dec 05 19:07:36 crc kubenswrapper[4828]: I1205 19:07:36.985478 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2zjd" event={"ID":"a8f1d24b-86b2-4b82-a397-85027c6090f0","Type":"ContainerStarted","Data":"42ead5e6551c1bf1bcc7df4ae1e92528a1263f35664301d308a7bc7f938afaa9"}
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.000601 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fk7dk" podStartSLOduration=3.692220786 podStartE2EDuration="1m22.000588306s" podCreationTimestamp="2025-12-05 19:06:15 +0000 UTC" firstStartedPulling="2025-12-05 19:06:18.224505581 +0000 UTC m=+156.119727887" lastFinishedPulling="2025-12-05 19:07:36.532873101 +0000 UTC m=+234.428095407" observedRunningTime="2025-12-05 19:07:36.998430697 +0000 UTC m=+234.893653003" watchObservedRunningTime="2025-12-05 19:07:37.000588306 +0000 UTC m=+234.895810612"
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.022009 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v2zjd" podStartSLOduration=2.767964986 podStartE2EDuration="1m20.021987456s" podCreationTimestamp="2025-12-05 19:06:17 +0000 UTC" firstStartedPulling="2025-12-05 19:06:19.31127423 +0000 UTC m=+157.206496536" lastFinishedPulling="2025-12-05 19:07:36.5652967 +0000 UTC m=+234.460519006" observedRunningTime="2025-12-05 19:07:37.01847932 +0000 UTC m=+234.913701636" watchObservedRunningTime="2025-12-05 19:07:37.021987456 +0000 UTC m=+234.917209782"
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.199072 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggj5p"
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.277796 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9zlnj"
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.322685 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr5fk\" (UniqueName: \"kubernetes.io/projected/20686f13-e212-4a7c-8a64-8f8674311fff-kube-api-access-kr5fk\") pod \"20686f13-e212-4a7c-8a64-8f8674311fff\" (UID: \"20686f13-e212-4a7c-8a64-8f8674311fff\") "
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.322741 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20686f13-e212-4a7c-8a64-8f8674311fff-utilities\") pod \"20686f13-e212-4a7c-8a64-8f8674311fff\" (UID: \"20686f13-e212-4a7c-8a64-8f8674311fff\") "
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.322818 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20686f13-e212-4a7c-8a64-8f8674311fff-catalog-content\") pod \"20686f13-e212-4a7c-8a64-8f8674311fff\" (UID: \"20686f13-e212-4a7c-8a64-8f8674311fff\") "
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.323560 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20686f13-e212-4a7c-8a64-8f8674311fff-utilities" (OuterVolumeSpecName: "utilities") pod "20686f13-e212-4a7c-8a64-8f8674311fff" (UID: "20686f13-e212-4a7c-8a64-8f8674311fff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.327555 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20686f13-e212-4a7c-8a64-8f8674311fff-kube-api-access-kr5fk" (OuterVolumeSpecName: "kube-api-access-kr5fk") pod "20686f13-e212-4a7c-8a64-8f8674311fff" (UID: "20686f13-e212-4a7c-8a64-8f8674311fff"). InnerVolumeSpecName "kube-api-access-kr5fk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.424500 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr5fk\" (UniqueName: \"kubernetes.io/projected/20686f13-e212-4a7c-8a64-8f8674311fff-kube-api-access-kr5fk\") on node \"crc\" DevicePath \"\""
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.424549 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20686f13-e212-4a7c-8a64-8f8674311fff-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.440304 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20686f13-e212-4a7c-8a64-8f8674311fff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20686f13-e212-4a7c-8a64-8f8674311fff" (UID: "20686f13-e212-4a7c-8a64-8f8674311fff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.525543 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20686f13-e212-4a7c-8a64-8f8674311fff-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.992762 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggj5p"
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.992998 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggj5p" event={"ID":"20686f13-e212-4a7c-8a64-8f8674311fff","Type":"ContainerDied","Data":"c8076a7759f14b85b9e7068f9853c6b1ecd5d5fb2fb247c5fbfbfe35608dbb22"}
Dec 05 19:07:37 crc kubenswrapper[4828]: I1205 19:07:37.993055 4828 scope.go:117] "RemoveContainer" containerID="66545eae14d3fae2372b56659b9ad84a5367f75346d4d75a6af102b86554dcbb"
Dec 05 19:07:38 crc kubenswrapper[4828]: I1205 19:07:38.008979 4828 scope.go:117] "RemoveContainer" containerID="8239fdcc04d165852b6cd46ff8149894478109656f4192b8bf57240c3d8f21b9"
Dec 05 19:07:38 crc kubenswrapper[4828]: I1205 19:07:38.030982 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggj5p"]
Dec 05 19:07:38 crc kubenswrapper[4828]: I1205 19:07:38.035656 4828 scope.go:117] "RemoveContainer" containerID="3a1d72b6f1a1eaed5b80ad80f826f5d82e33afb5f8bac08273effe5e4d052e27"
Dec 05 19:07:38 crc kubenswrapper[4828]: I1205 19:07:38.036999 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ggj5p"]
Dec 05 19:07:38 crc kubenswrapper[4828]: I1205 19:07:38.259698 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v2zjd"
Dec 05 19:07:38 crc kubenswrapper[4828]: I1205 19:07:38.260277 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v2zjd"
Dec 05 19:07:38 crc kubenswrapper[4828]: I1205 19:07:38.456425 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20686f13-e212-4a7c-8a64-8f8674311fff" path="/var/lib/kubelet/pods/20686f13-e212-4a7c-8a64-8f8674311fff/volumes"
Dec 05 19:07:39 crc kubenswrapper[4828]: I1205 19:07:39.299532 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v2zjd" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" containerName="registry-server" probeResult="failure" output=<
Dec 05 19:07:39 crc kubenswrapper[4828]: timeout: failed to connect service ":50051" within 1s
Dec 05 19:07:39 crc kubenswrapper[4828]: >
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.707194 4828 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.708066 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947f583f-02d4-4dde-9dd3-3991bad7d22e" containerName="extract-utilities"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708083 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="947f583f-02d4-4dde-9dd3-3991bad7d22e" containerName="extract-utilities"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.708093 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec13a6b-1e24-4cce-9757-2a78f16ca6bf" containerName="pruner"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708101 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec13a6b-1e24-4cce-9757-2a78f16ca6bf" containerName="pruner"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.708114 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20686f13-e212-4a7c-8a64-8f8674311fff" containerName="extract-content"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708122 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="20686f13-e212-4a7c-8a64-8f8674311fff" containerName="extract-content"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.708133 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3748abcf-49c3-4f2f-a663-3256367817b0" containerName="registry-server"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708139 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3748abcf-49c3-4f2f-a663-3256367817b0" containerName="registry-server"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.708149 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20686f13-e212-4a7c-8a64-8f8674311fff" containerName="registry-server"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708156 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="20686f13-e212-4a7c-8a64-8f8674311fff" containerName="registry-server"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.708169 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947f583f-02d4-4dde-9dd3-3991bad7d22e" containerName="extract-content"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708176 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="947f583f-02d4-4dde-9dd3-3991bad7d22e" containerName="extract-content"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.708189 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3748abcf-49c3-4f2f-a663-3256367817b0" containerName="extract-content"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708196 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3748abcf-49c3-4f2f-a663-3256367817b0" containerName="extract-content"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.708208 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20686f13-e212-4a7c-8a64-8f8674311fff" containerName="extract-utilities"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708216 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="20686f13-e212-4a7c-8a64-8f8674311fff" containerName="extract-utilities"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.708229 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947f583f-02d4-4dde-9dd3-3991bad7d22e" containerName="registry-server"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708236 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="947f583f-02d4-4dde-9dd3-3991bad7d22e" containerName="registry-server"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.708251 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3748abcf-49c3-4f2f-a663-3256367817b0" containerName="extract-utilities"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708260 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3748abcf-49c3-4f2f-a663-3256367817b0" containerName="extract-utilities"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708374 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="947f583f-02d4-4dde-9dd3-3991bad7d22e" containerName="registry-server"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708388 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="3748abcf-49c3-4f2f-a663-3256367817b0" containerName="registry-server"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708404 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec13a6b-1e24-4cce-9757-2a78f16ca6bf" containerName="pruner"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708413 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="20686f13-e212-4a7c-8a64-8f8674311fff" containerName="registry-server"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708786 4828 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.708957 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.709107 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96" gracePeriod=15
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.709179 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38" gracePeriod=15
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.709170 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421" gracePeriod=15
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.709199 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b" gracePeriod=15
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.709147 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519" gracePeriod=15
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.710813 4828 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.711090 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711103 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.711112 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711119 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.711131 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711137 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.711147 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711153 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.711161 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711166 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.711175 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711181 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 05 19:07:45 crc kubenswrapper[4828]: E1205 19:07:45.711189 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711196 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711289 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711300 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711309 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711317 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711324 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.711331 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.720001 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fk7dk"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.720063 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fk7dk"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.749772 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.767720 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fk7dk"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.774326 4828 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.829749 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.829841 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.829888 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.829975 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.830010 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.830059 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.830089 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.830119 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.930950 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.930999 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931027 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931050 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931074 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931093 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931120 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931141 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931146 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931114 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931204 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931232 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931415 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931422 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931445 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:45 crc kubenswrapper[4828]: I1205 19:07:45.931445 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:46 crc kubenswrapper[4828]: I1205 19:07:46.050879 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 19:07:46 crc kubenswrapper[4828]: I1205 19:07:46.093155 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fk7dk"
Dec 05 19:07:46 crc kubenswrapper[4828]: I1205 19:07:46.837481 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fk7dk"]
Dec 05 19:07:47 crc kubenswrapper[4828]: I1205 19:07:47.052334 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8b855c6abcdcc2d61e34a34cd6b06b8a3ae0de5a813767174d745a812d8771cd"}
Dec 05 19:07:47 crc kubenswrapper[4828]: E1205 19:07:47.783976 4828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e67492264e0f0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 19:07:47.783123184 +0000 UTC m=+245.678345490,LastTimestamp:2025-12-05 19:07:47.783123184 +0000 UTC m=+245.678345490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.061505 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4d7d8dcc0c484850a2f06711a5cebe6eba546e924d7e8d837274c471981e284b"}
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.062322 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.064467 4828 generic.go:334] "Generic (PLEG): container finished" podID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" containerID="1828efc73ccc249a8cee67af51bd91f0780e88cc9127586f3daf2443b02be907" exitCode=0
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.064536 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dd5b1db3-f574-4ff6-9160-f7daf0564b25","Type":"ContainerDied","Data":"1828efc73ccc249a8cee67af51bd91f0780e88cc9127586f3daf2443b02be907"}
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.065507 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.065950 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.068007 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.069956 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.071031 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519" exitCode=0
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.071060 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421" exitCode=0
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.071071 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38" exitCode=0
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.071081 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b" exitCode=2
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.071138 4828 scope.go:117] "RemoveContainer" containerID="72fbf2dc76745ac97c0e6806b95cea72e8bced6d29245bb3662e66d3b209dee3"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.071390 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fk7dk" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" containerName="registry-server" containerID="cri-o://e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484" gracePeriod=2
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.071903 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.072172 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.072433 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.307196 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v2zjd"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.307937 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.308177 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.308400 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.308623 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.347505 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v2zjd"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.348382 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.348866 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.349243 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.349484 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.467094 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fk7dk"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.467625 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.467975 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.468451 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.468715 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused"
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.666384 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-utilities\") pod \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\" (UID: \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\") "
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.666463 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5276z\" (UniqueName: \"kubernetes.io/projected/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-kube-api-access-5276z\") pod \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\" (UID: \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\") "
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.666578 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-catalog-content\") pod \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\" (UID: \"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b\") "
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.667753 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-utilities" (OuterVolumeSpecName: "utilities") pod "10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" (UID: "10ab3b3d-1c03-4d60-8a16-c34b4a313e7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.674610 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-kube-api-access-5276z" (OuterVolumeSpecName: "kube-api-access-5276z") pod "10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" (UID: "10ab3b3d-1c03-4d60-8a16-c34b4a313e7b"). InnerVolumeSpecName "kube-api-access-5276z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.729254 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" (UID: "10ab3b3d-1c03-4d60-8a16-c34b4a313e7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.768028 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.768073 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5276z\" (UniqueName: \"kubernetes.io/projected/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-kube-api-access-5276z\") on node \"crc\" DevicePath \"\""
Dec 05 19:07:48 crc kubenswrapper[4828]: I1205 19:07:48.768087 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.080615 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.083382 4828 generic.go:334] "Generic (PLEG): container finished" podID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" containerID="e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484" exitCode=0
Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.083424 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fk7dk" event={"ID":"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b","Type":"ContainerDied","Data":"e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484"}
Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.083508 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fk7dk"
Need to start a new one" pod="openshift-marketplace/community-operators-fk7dk" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.083778 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fk7dk" event={"ID":"10ab3b3d-1c03-4d60-8a16-c34b4a313e7b","Type":"ContainerDied","Data":"7192461de6764ea6b1085ed13f143c0a41f8b083e74ef2c214945bbf23cbbbfe"} Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.083812 4828 scope.go:117] "RemoveContainer" containerID="e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.085101 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.085629 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.086136 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.086659 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.102954 4828 scope.go:117] "RemoveContainer" containerID="c9ba57c711eedd2ba39884bfa6be2454861a52c77ec8a677396abf8b55c7e0ab" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.108518 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.109228 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.109685 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:49 
crc kubenswrapper[4828]: I1205 19:07:49.110018 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.122085 4828 scope.go:117] "RemoveContainer" containerID="c19d3201e5ca7ed8271b8939a41f8e7bc382479a07c7b28cc1b51b4fb1135cdf" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.159669 4828 scope.go:117] "RemoveContainer" containerID="e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484" Dec 05 19:07:49 crc kubenswrapper[4828]: E1205 19:07:49.160219 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484\": container with ID starting with e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484 not found: ID does not exist" containerID="e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.160259 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484"} err="failed to get container status \"e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484\": rpc error: code = NotFound desc = could not find container \"e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484\": container with ID starting with e679a24f8992a0a1909f21ec1bf63ee29463f59f3279f99d84fd841fa55b3484 not found: ID does not exist" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.160286 4828 scope.go:117] "RemoveContainer" containerID="c9ba57c711eedd2ba39884bfa6be2454861a52c77ec8a677396abf8b55c7e0ab" Dec 05 19:07:49 crc kubenswrapper[4828]: E1205 19:07:49.160667 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ba57c711eedd2ba39884bfa6be2454861a52c77ec8a677396abf8b55c7e0ab\": container with ID starting with c9ba57c711eedd2ba39884bfa6be2454861a52c77ec8a677396abf8b55c7e0ab not found: ID does not exist" containerID="c9ba57c711eedd2ba39884bfa6be2454861a52c77ec8a677396abf8b55c7e0ab" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.160685 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ba57c711eedd2ba39884bfa6be2454861a52c77ec8a677396abf8b55c7e0ab"} err="failed to get container status \"c9ba57c711eedd2ba39884bfa6be2454861a52c77ec8a677396abf8b55c7e0ab\": rpc error: code = NotFound desc = could not find container \"c9ba57c711eedd2ba39884bfa6be2454861a52c77ec8a677396abf8b55c7e0ab\": container with ID starting with c9ba57c711eedd2ba39884bfa6be2454861a52c77ec8a677396abf8b55c7e0ab not found: ID does not exist" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.160697 4828 scope.go:117] "RemoveContainer" containerID="c19d3201e5ca7ed8271b8939a41f8e7bc382479a07c7b28cc1b51b4fb1135cdf" Dec 05 19:07:49 crc kubenswrapper[4828]: E1205 19:07:49.161255 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c19d3201e5ca7ed8271b8939a41f8e7bc382479a07c7b28cc1b51b4fb1135cdf\": container with ID starting with 
c19d3201e5ca7ed8271b8939a41f8e7bc382479a07c7b28cc1b51b4fb1135cdf not found: ID does not exist" containerID="c19d3201e5ca7ed8271b8939a41f8e7bc382479a07c7b28cc1b51b4fb1135cdf" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.161308 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19d3201e5ca7ed8271b8939a41f8e7bc382479a07c7b28cc1b51b4fb1135cdf"} err="failed to get container status \"c19d3201e5ca7ed8271b8939a41f8e7bc382479a07c7b28cc1b51b4fb1135cdf\": rpc error: code = NotFound desc = could not find container \"c19d3201e5ca7ed8271b8939a41f8e7bc382479a07c7b28cc1b51b4fb1135cdf\": container with ID starting with c19d3201e5ca7ed8271b8939a41f8e7bc382479a07c7b28cc1b51b4fb1135cdf not found: ID does not exist" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.363570 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.364200 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.364542 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.364985 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.365233 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.476486 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd5b1db3-f574-4ff6-9160-f7daf0564b25-kube-api-access\") pod \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\" (UID: \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\") " Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.476535 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dd5b1db3-f574-4ff6-9160-f7daf0564b25-var-lock\") pod \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\" (UID: \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\") " Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.476636 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd5b1db3-f574-4ff6-9160-f7daf0564b25-kubelet-dir\") pod \"dd5b1db3-f574-4ff6-9160-f7daf0564b25\" (UID: 
\"dd5b1db3-f574-4ff6-9160-f7daf0564b25\") " Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.476659 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd5b1db3-f574-4ff6-9160-f7daf0564b25-var-lock" (OuterVolumeSpecName: "var-lock") pod "dd5b1db3-f574-4ff6-9160-f7daf0564b25" (UID: "dd5b1db3-f574-4ff6-9160-f7daf0564b25"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.476765 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd5b1db3-f574-4ff6-9160-f7daf0564b25-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dd5b1db3-f574-4ff6-9160-f7daf0564b25" (UID: "dd5b1db3-f574-4ff6-9160-f7daf0564b25"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.476927 4828 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd5b1db3-f574-4ff6-9160-f7daf0564b25-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.476945 4828 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dd5b1db3-f574-4ff6-9160-f7daf0564b25-var-lock\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:49 crc kubenswrapper[4828]: E1205 19:07:49.477150 4828 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" volumeName="registry-storage" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.480885 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5b1db3-f574-4ff6-9160-f7daf0564b25-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dd5b1db3-f574-4ff6-9160-f7daf0564b25" (UID: "dd5b1db3-f574-4ff6-9160-f7daf0564b25"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:07:49 crc kubenswrapper[4828]: I1205 19:07:49.579038 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd5b1db3-f574-4ff6-9160-f7daf0564b25-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.093327 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dd5b1db3-f574-4ff6-9160-f7daf0564b25","Type":"ContainerDied","Data":"8280d3c037271a01da908c285773cca1d182e6818d49e078fe408155b3a5d4b5"} Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.094279 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8280d3c037271a01da908c285773cca1d182e6818d49e078fe408155b3a5d4b5" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.093764 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.099084 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.100390 4828 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96" exitCode=0 Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.149609 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.150258 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.150723 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.151159 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.590997 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.592125 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
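Each "Failed to get status for pod ... connection refused" entry above is the status manager hitting the same wall: the TCP dial to api-int.crc.testing:6443 is refused because no kube-apiserver is listening on that port while the static pod is being replaced. The failure mode can be reproduced outside the kubelet with a plain dial; a short sketch, with the endpoint taken from the log and the timeout chosen arbitrarily:

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	// The endpoint the kubelet is dialing in the entries above.
	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
	if err != nil {
		// "connect: connection refused" wraps ECONNREFUSED: the host is
		// reachable but nothing is bound to the port yet.
		if errors.Is(err, syscall.ECONNREFUSED) {
			fmt.Println("apiserver not accepting connections yet:", err)
			return
		}
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver is accepting connections")
}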
Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.592732 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.593109 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.593320 4828 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.593621 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.594121 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.631088 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.631318 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.631158 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.631483 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.631611 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.631657 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.632005 4828 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.632081 4828 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:50 crc kubenswrapper[4828]: I1205 19:07:50.632147 4828 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.109813 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.111325 4828 scope.go:117] "RemoveContainer" containerID="6a2c4c9ca67bd949c197496bd8a05759363b496ad81fd4c833755cac8ef47519" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.111459 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.128356 4828 scope.go:117] "RemoveContainer" containerID="db85c1132a3350fada7256b71e5458bd964c19ca4b2025e1c10c169b6015b421" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.131052 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.131419 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.131840 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.132307 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.132614 4828 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.143764 4828 scope.go:117] "RemoveContainer" containerID="0351ac3afcead545dd6d1a628dd09726da2962cbd3644697193badc364c45e38" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.158367 4828 scope.go:117] "RemoveContainer" containerID="4bfb30570d972ecd70d62496ae281db2c0c4a31c95ac5de5d21f8900282e7c9b" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.172396 4828 scope.go:117] "RemoveContainer" containerID="e3e64838b5f36b1e9954a47f29bf7d5c895d481b9afeb9576746196f1c507b96" Dec 05 19:07:51 crc kubenswrapper[4828]: I1205 19:07:51.187958 4828 scope.go:117] "RemoveContainer" containerID="975ee518caf0b8eb4232bb6719381f17ba24b0bb587eb1cdec1c33ec8676d5d9" Dec 05 19:07:52 crc kubenswrapper[4828]: I1205 19:07:52.449024 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:52 crc kubenswrapper[4828]: I1205 19:07:52.449722 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:52 crc kubenswrapper[4828]: I1205 19:07:52.450310 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:52 crc kubenswrapper[4828]: I1205 19:07:52.450899 4828 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:52 crc kubenswrapper[4828]: I1205 19:07:52.451391 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:52 crc kubenswrapper[4828]: I1205 19:07:52.458696 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 05 19:07:53 crc kubenswrapper[4828]: I1205 19:07:53.864648 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" podUID="d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" containerName="oauth-openshift" containerID="cri-o://180cd1b0b58dc0c4a32f9425bcf73527864c84ce4a12f0458a9d6816e4160bf5" gracePeriod=15 Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.132151 4828 generic.go:334] "Generic (PLEG): container finished" podID="d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" containerID="180cd1b0b58dc0c4a32f9425bcf73527864c84ce4a12f0458a9d6816e4160bf5" exitCode=0 Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.132191 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" event={"ID":"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5","Type":"ContainerDied","Data":"180cd1b0b58dc0c4a32f9425bcf73527864c84ce4a12f0458a9d6816e4160bf5"} Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.838232 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.839015 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.839347 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.839526 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.839700 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.839953 4828 status_manager.go:851] "Failed to get status for pod" podUID="d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b6nk4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883587 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-login\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883638 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-provider-selection\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883663 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-error\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883698 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4qq6\" (UniqueName: \"kubernetes.io/projected/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-kube-api-access-q4qq6\") pod 
\"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883717 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-audit-dir\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883739 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-cliconfig\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883757 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-idp-0-file-data\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883777 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-serving-cert\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883796 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-session\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883839 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-trusted-ca-bundle\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883859 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-service-ca\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883890 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-ocp-branding-template\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.883933 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-audit-policies\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 
19:07:54.883954 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-router-certs\") pod \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\" (UID: \"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5\") " Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.884857 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.885355 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.885370 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.885972 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.886253 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.888742 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.889002 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.889195 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.889462 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.889623 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.889975 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.893108 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.896179 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.896334 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-kube-api-access-q4qq6" (OuterVolumeSpecName: "kube-api-access-q4qq6") pod "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" (UID: "d5b5fb60-4709-4e6c-b9a6-ba869094f1e5"). InnerVolumeSpecName "kube-api-access-q4qq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984636 4828 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984689 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984701 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984713 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984724 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984733 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4qq6\" (UniqueName: \"kubernetes.io/projected/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-kube-api-access-q4qq6\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984746 4828 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984764 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984777 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984789 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984800 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984809 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-trusted-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984819 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:54 crc kubenswrapper[4828]: I1205 19:07:54.984846 4828 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.139263 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" event={"ID":"d5b5fb60-4709-4e6c-b9a6-ba869094f1e5","Type":"ContainerDied","Data":"fee5ed0b23789fc8750897f7c70f2ed51cc9349009015f5a5271c368611155f3"} Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.139333 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.139347 4828 scope.go:117] "RemoveContainer" containerID="180cd1b0b58dc0c4a32f9425bcf73527864c84ce4a12f0458a9d6816e4160bf5" Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.140663 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.141268 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.141885 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.142364 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.143109 4828 status_manager.go:851] "Failed to get status for pod" podUID="d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b6nk4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.165779 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.166748 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.167425 4828 status_manager.go:851] "Failed to get status for pod" podUID="d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b6nk4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.168059 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:55 crc kubenswrapper[4828]: I1205 19:07:55.168459 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:57 crc kubenswrapper[4828]: E1205 19:07:57.485234 4828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e67492264e0f0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 19:07:47.783123184 +0000 UTC m=+245.678345490,LastTimestamp:2025-12-05 19:07:47.783123184 +0000 UTC m=+245.678345490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 05 19:07:57 crc kubenswrapper[4828]: E1205 19:07:57.807133 4828 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:57 crc kubenswrapper[4828]: E1205 19:07:57.807397 4828 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:57 crc kubenswrapper[4828]: E1205 19:07:57.807679 
4828 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:57 crc kubenswrapper[4828]: E1205 19:07:57.808051 4828 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:57 crc kubenswrapper[4828]: E1205 19:07:57.808254 4828 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:57 crc kubenswrapper[4828]: I1205 19:07:57.808281 4828 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 05 19:07:57 crc kubenswrapper[4828]: E1205 19:07:57.808652 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms" Dec 05 19:07:58 crc kubenswrapper[4828]: E1205 19:07:58.010145 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms" Dec 05 19:07:58 crc kubenswrapper[4828]: E1205 19:07:58.411170 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms" Dec 05 19:07:59 crc kubenswrapper[4828]: E1205 19:07:59.212903 4828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s" Dec 05 19:07:59 crc kubenswrapper[4828]: I1205 19:07:59.445567 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
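The "Failed to ensure lease exists, will retry" entries above show the node-lease controller doubling its retry interval after each consecutive failure: 200ms, 400ms, 800ms, then 1.6s, once "failed 5 attempts to update lease" forced the fallback from updating to ensuring the lease. A sketch of that doubling schedule; the upper bound is an assumed value for illustration, not one taken from the kubelet's configuration:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Double the retry interval after every failed attempt, matching the
	// 200ms -> 400ms -> 800ms -> 1.6s progression logged above.
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap on the backoff
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}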
Dec 05 19:07:59 crc kubenswrapper[4828]: I1205 19:07:59.446362 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:59 crc kubenswrapper[4828]: I1205 19:07:59.447098 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:59 crc kubenswrapper[4828]: I1205 19:07:59.447730 4828 status_manager.go:851] "Failed to get status for pod" podUID="d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b6nk4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:59 crc kubenswrapper[4828]: I1205 19:07:59.447962 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:59 crc kubenswrapper[4828]: I1205 19:07:59.448404 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:07:59 crc kubenswrapper[4828]: I1205 19:07:59.469908 4828 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1662aeae-9ff0-4304-8fc5-c957c2ae9f39" Dec 05 19:07:59 crc kubenswrapper[4828]: I1205 19:07:59.469970 4828 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1662aeae-9ff0-4304-8fc5-c957c2ae9f39" Dec 05 19:07:59 crc kubenswrapper[4828]: E1205 19:07:59.470671 4828 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:07:59 crc kubenswrapper[4828]: I1205 19:07:59.471534 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:08:00 crc kubenswrapper[4828]: I1205 19:08:00.174897 4828 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="883565f957e8801f6489dff088fcabfdfb4c8cdf86feb8bd22440ddbd57a82fb" exitCode=0 Dec 05 19:08:00 crc kubenswrapper[4828]: I1205 19:08:00.175014 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"883565f957e8801f6489dff088fcabfdfb4c8cdf86feb8bd22440ddbd57a82fb"} Dec 05 19:08:00 crc kubenswrapper[4828]: I1205 19:08:00.175218 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"469eafabde110c7a298e10db97decf94204fcb65068f7007d2c09a73aa96f890"} Dec 05 19:08:00 crc kubenswrapper[4828]: I1205 19:08:00.175629 4828 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1662aeae-9ff0-4304-8fc5-c957c2ae9f39" Dec 05 19:08:00 crc kubenswrapper[4828]: I1205 19:08:00.175673 4828 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1662aeae-9ff0-4304-8fc5-c957c2ae9f39" Dec 05 19:08:00 crc kubenswrapper[4828]: I1205 19:08:00.176096 4828 status_manager.go:851] "Failed to get status for pod" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" pod="openshift-marketplace/redhat-operators-v2zjd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v2zjd\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:08:00 crc kubenswrapper[4828]: E1205 19:08:00.176322 4828 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:08:00 crc kubenswrapper[4828]: I1205 19:08:00.176511 4828 status_manager.go:851] "Failed to get status for pod" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" pod="openshift-marketplace/community-operators-fk7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fk7dk\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:08:00 crc kubenswrapper[4828]: I1205 19:08:00.176797 4828 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:08:00 crc kubenswrapper[4828]: I1205 19:08:00.177402 4828 status_manager.go:851] "Failed to get status for pod" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:08:00 crc kubenswrapper[4828]: I1205 19:08:00.177783 4828 status_manager.go:851] "Failed to get status for pod" podUID="d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" pod="openshift-authentication/oauth-openshift-558db77b4-b6nk4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-b6nk4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 05 19:08:01 crc kubenswrapper[4828]: I1205 19:08:01.196080 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"faeaf75338b0c4c6bc218ac369c70d82c374b79db043a543df736fd8b12f9f00"} Dec 05 19:08:01 crc kubenswrapper[4828]: I1205 19:08:01.196130 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"34f3e298e75b632e991c97869248b5f08bd04ea9c36583e3b921d696a70250f5"} Dec 05 19:08:01 crc kubenswrapper[4828]: I1205 19:08:01.196141 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1c087bc770a45d855009f8aae8186a1837a6cf3d9bd07991a09c52970c01537c"} Dec 05 19:08:01 crc kubenswrapper[4828]: I1205 19:08:01.196150 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6c8bb735eeb0ec9cd7135846feb8c7e35c0403d30d87429f680f9be88642bccc"} Dec 05 19:08:02 crc kubenswrapper[4828]: I1205 19:08:02.206776 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 05 19:08:02 crc kubenswrapper[4828]: I1205 19:08:02.207120 4828 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c" exitCode=1 Dec 05 19:08:02 crc kubenswrapper[4828]: I1205 19:08:02.207181 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c"} Dec 05 19:08:02 crc kubenswrapper[4828]: I1205 19:08:02.207687 4828 scope.go:117] "RemoveContainer" containerID="957283bc0e9f77eb2451b32e2eb640d85529ea612a0a1e490904bdaf0b9fad9c" Dec 05 19:08:02 crc kubenswrapper[4828]: I1205 19:08:02.210780 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bbfa5eab5a982d687f5400421a4b2baa52e41d55d250746ffb610e42fdd146d5"} Dec 05 19:08:02 crc kubenswrapper[4828]: I1205 19:08:02.210998 4828 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1662aeae-9ff0-4304-8fc5-c957c2ae9f39" Dec 05 19:08:02 crc kubenswrapper[4828]: I1205 19:08:02.211015 4828 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1662aeae-9ff0-4304-8fc5-c957c2ae9f39" Dec 05 19:08:02 crc kubenswrapper[4828]: I1205 19:08:02.211198 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:08:02 crc kubenswrapper[4828]: I1205 19:08:02.597481 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:08:03 crc kubenswrapper[4828]: I1205 19:08:03.222122 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 05 19:08:03 crc kubenswrapper[4828]: I1205 19:08:03.222361 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7cbef1a5d97d3328d6beb1cea2cd891f9d7efbbfc757e474d8beb8ca17d8f39c"} Dec 05 19:08:04 crc kubenswrapper[4828]: I1205 19:08:04.074274 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:08:04 crc kubenswrapper[4828]: I1205 19:08:04.472625 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:08:04 crc kubenswrapper[4828]: I1205 19:08:04.472695 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:08:04 crc kubenswrapper[4828]: I1205 19:08:04.482320 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:08:04 crc kubenswrapper[4828]: I1205 19:08:04.950304 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:08:04 crc kubenswrapper[4828]: I1205 19:08:04.954441 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:08:07 crc kubenswrapper[4828]: I1205 19:08:07.234060 4828 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:08:07 crc kubenswrapper[4828]: I1205 19:08:07.386322 4828 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2dd4cfc5-9f9c-4597-9bf8-6513eaaeaaea" Dec 05 19:08:08 crc kubenswrapper[4828]: I1205 19:08:08.250086 4828 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1662aeae-9ff0-4304-8fc5-c957c2ae9f39" Dec 05 19:08:08 crc kubenswrapper[4828]: I1205 19:08:08.250115 4828 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1662aeae-9ff0-4304-8fc5-c957c2ae9f39" Dec 05 19:08:08 crc kubenswrapper[4828]: I1205 19:08:08.254481 4828 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2dd4cfc5-9f9c-4597-9bf8-6513eaaeaaea" Dec 05 19:08:08 crc kubenswrapper[4828]: I1205 19:08:08.254881 4828 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://6c8bb735eeb0ec9cd7135846feb8c7e35c0403d30d87429f680f9be88642bccc" Dec 05 19:08:08 crc kubenswrapper[4828]: I1205 19:08:08.254909 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:08:09 crc kubenswrapper[4828]: I1205 19:08:09.255187 4828 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1662aeae-9ff0-4304-8fc5-c957c2ae9f39" Dec 05 19:08:09 crc kubenswrapper[4828]: I1205 19:08:09.255216 4828 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1662aeae-9ff0-4304-8fc5-c957c2ae9f39" Dec 05 19:08:09 crc kubenswrapper[4828]: I1205 19:08:09.258344 4828 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2dd4cfc5-9f9c-4597-9bf8-6513eaaeaaea" Dec 05 19:08:14 crc kubenswrapper[4828]: I1205 19:08:14.078579 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 19:08:17 crc kubenswrapper[4828]: I1205 19:08:17.753989 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 05 19:08:17 crc kubenswrapper[4828]: I1205 19:08:17.902600 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 05 19:08:18 crc kubenswrapper[4828]: I1205 19:08:18.250211 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 05 19:08:19 crc kubenswrapper[4828]: I1205 19:08:19.138781 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 05 19:08:19 crc kubenswrapper[4828]: I1205 19:08:19.691288 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 05 19:08:19 crc kubenswrapper[4828]: I1205 19:08:19.925777 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 05 19:08:20 crc kubenswrapper[4828]: I1205 19:08:20.065097 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 19:08:20 crc kubenswrapper[4828]: I1205 19:08:20.086500 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 05 19:08:20 crc kubenswrapper[4828]: I1205 19:08:20.167885 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 05 19:08:20 crc kubenswrapper[4828]: I1205 19:08:20.196806 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 05 19:08:20 crc kubenswrapper[4828]: I1205 19:08:20.284641 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 05 19:08:20 crc kubenswrapper[4828]: I1205 19:08:20.376284 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 05 19:08:20 crc kubenswrapper[4828]: I1205 19:08:20.471348 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 05 19:08:20 crc kubenswrapper[4828]: I1205 19:08:20.686117 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 05 19:08:20 crc kubenswrapper[4828]: I1205 19:08:20.899865 4828 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 05 19:08:21 crc kubenswrapper[4828]: I1205 19:08:21.083069 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 05 19:08:21 crc kubenswrapper[4828]: I1205 19:08:21.257404 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 19:08:21 crc kubenswrapper[4828]: I1205 19:08:21.375006 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 05 19:08:21 crc kubenswrapper[4828]: I1205 19:08:21.421418 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 05 19:08:21 crc kubenswrapper[4828]: I1205 19:08:21.594530 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 05 19:08:21 crc kubenswrapper[4828]: I1205 19:08:21.665268 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 05 19:08:21 crc kubenswrapper[4828]: I1205 19:08:21.714490 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 05 19:08:21 crc kubenswrapper[4828]: I1205 19:08:21.909095 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 05 19:08:21 crc kubenswrapper[4828]: I1205 19:08:21.923617 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 05 19:08:21 crc kubenswrapper[4828]: I1205 19:08:21.929092 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 05 19:08:21 crc kubenswrapper[4828]: I1205 19:08:21.947206 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 05 19:08:21 crc kubenswrapper[4828]: I1205 19:08:21.956237 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.081639 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.082939 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.162532 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.259053 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.291108 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.357155 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.518712 4828 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.519714 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.543015 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.544739 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.564237 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.576628 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.603652 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.650386 4828 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.703427 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.750671 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.803903 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.903024 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 05 19:08:22 crc kubenswrapper[4828]: I1205 19:08:22.999497 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.064745 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.152737 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.211397 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.254132 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.275811 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.380581 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.398487 4828 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.423841 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.506240 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.528201 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.558311 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.563327 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.594305 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.727122 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.803419 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.806812 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.876087 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 19:08:23 crc kubenswrapper[4828]: I1205 19:08:23.924141 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.003146 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.049291 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.053273 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.293588 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.302289 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.336981 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.550842 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 
19:08:24.554064 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.585813 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.597962 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.760471 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.772492 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.817013 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 05 19:08:24 crc kubenswrapper[4828]: I1205 19:08:24.985906 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.006061 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.084731 4828 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.104309 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.222281 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.320544 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.358161 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.480981 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.534079 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.654236 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.698847 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.705182 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.793036 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.799039 4828 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 05 19:08:25 crc kubenswrapper[4828]: I1205 19:08:25.945929 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.018810 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.252811 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.264514 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.304901 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.319650 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.321264 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.386903 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.422764 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.443484 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.443930 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.548167 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.563062 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.635270 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.635584 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.639480 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.693940 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.813393 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 05 
19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.814518 4828 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.824634 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.838374 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.863186 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.926482 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 05 19:08:26 crc kubenswrapper[4828]: I1205 19:08:26.934541 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.069345 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.096600 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.116005 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.121373 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.143388 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.180893 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.275744 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.293949 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.377081 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.427359 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.465266 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.479164 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.479200 4828 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.598290 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.633506 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.781851 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.802109 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.838971 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.857126 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.928508 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.937540 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.985703 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 05 19:08:27 crc kubenswrapper[4828]: I1205 19:08:27.987890 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.154335 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.203057 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.264398 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.270059 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.288542 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.423618 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.430331 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.526856 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.585962 4828 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.609665 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.648271 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.656151 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.671977 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.706731 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.710872 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.721300 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.735681 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.921184 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 05 19:08:28 crc kubenswrapper[4828]: I1205 19:08:28.942616 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.019374 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.034188 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.084353 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.084495 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.087025 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.287995 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.317082 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.377551 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.646379 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" 
Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.667809 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.668439 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.703130 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.733321 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.739464 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.759947 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.762976 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.872202 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.906207 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.913132 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.965101 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 05 19:08:29 crc kubenswrapper[4828]: I1205 19:08:29.973580 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.007099 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.020898 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.087431 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.114372 4828 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.126505 4828 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.127298 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.127284033 podStartE2EDuration="45.127284033s" podCreationTimestamp="2025-12-05 19:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:08:07.337960882 +0000 UTC m=+265.233183198" watchObservedRunningTime="2025-12-05 19:08:30.127284033 +0000 UTC m=+288.022506349" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.131302 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6nk4","openshift-marketplace/community-operators-fk7dk","openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.131365 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.136625 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.137016 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.152177 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.152150637 podStartE2EDuration="23.152150637s" podCreationTimestamp="2025-12-05 19:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:08:30.151815538 +0000 UTC m=+288.047037844" watchObservedRunningTime="2025-12-05 19:08:30.152150637 +0000 UTC m=+288.047372983" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.188538 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.189860 4828 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.245545 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.335889 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.337042 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.452297 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.456103 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" path="/var/lib/kubelet/pods/10ab3b3d-1c03-4d60-8a16-c34b4a313e7b/volumes" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.456979 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" path="/var/lib/kubelet/pods/d5b5fb60-4709-4e6c-b9a6-ba869094f1e5/volumes" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.462677 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.527400 4828 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.552523 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.590588 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.646200 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.646966 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.658149 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.868241 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.917970 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.918814 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 05 19:08:30 crc kubenswrapper[4828]: I1205 19:08:30.950095 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 05 19:08:31 crc kubenswrapper[4828]: I1205 19:08:31.088155 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 19:08:31 crc kubenswrapper[4828]: I1205 19:08:31.331947 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 05 19:08:31 crc kubenswrapper[4828]: I1205 19:08:31.391869 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 05 19:08:31 crc kubenswrapper[4828]: I1205 19:08:31.468060 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 05 19:08:31 crc kubenswrapper[4828]: I1205 19:08:31.691397 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 05 19:08:31 crc kubenswrapper[4828]: I1205 19:08:31.820232 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 05 19:08:31 crc kubenswrapper[4828]: I1205 19:08:31.869307 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 05 19:08:32 crc kubenswrapper[4828]: I1205 19:08:32.200305 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 19:08:32 crc kubenswrapper[4828]: I1205 19:08:32.273714 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 05 19:08:32 crc kubenswrapper[4828]: I1205 19:08:32.294117 4828 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-console"/"openshift-service-ca.crt" Dec 05 19:08:32 crc kubenswrapper[4828]: I1205 19:08:32.330939 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 19:08:32 crc kubenswrapper[4828]: I1205 19:08:32.540435 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 05 19:08:32 crc kubenswrapper[4828]: I1205 19:08:32.564029 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 05 19:08:32 crc kubenswrapper[4828]: I1205 19:08:32.692126 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 05 19:08:32 crc kubenswrapper[4828]: I1205 19:08:32.702272 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 05 19:08:32 crc kubenswrapper[4828]: I1205 19:08:32.859132 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.049140 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.240717 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.258347 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.310245 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.311152 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.415655 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.458998 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.495175 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.546582 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz"] Dec 05 19:08:33 crc kubenswrapper[4828]: E1205 19:08:33.547087 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" containerName="oauth-openshift" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.547118 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" containerName="oauth-openshift" Dec 05 19:08:33 crc kubenswrapper[4828]: E1205 19:08:33.547153 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" containerName="registry-server" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.547170 
4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" containerName="registry-server" Dec 05 19:08:33 crc kubenswrapper[4828]: E1205 19:08:33.547207 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" containerName="extract-content" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.547224 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" containerName="extract-content" Dec 05 19:08:33 crc kubenswrapper[4828]: E1205 19:08:33.547243 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" containerName="extract-utilities" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.547262 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" containerName="extract-utilities" Dec 05 19:08:33 crc kubenswrapper[4828]: E1205 19:08:33.547296 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" containerName="installer" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.547314 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" containerName="installer" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.547550 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5b1db3-f574-4ff6-9160-f7daf0564b25" containerName="installer" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.547595 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="10ab3b3d-1c03-4d60-8a16-c34b4a313e7b" containerName="registry-server" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.547623 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b5fb60-4709-4e6c-b9a6-ba869094f1e5" containerName="oauth-openshift" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.549287 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.556172 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.556211 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.556480 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.556559 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.556593 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.556649 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.556495 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.557213 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.558819 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.563423 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.563570 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.564193 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.573568 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz"] Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.581199 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.589045 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.595057 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.671907 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcq99\" (UniqueName: \"kubernetes.io/projected/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-kube-api-access-mcq99\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " 
pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.672185 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.672242 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.672659 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-user-template-login\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.672785 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-audit-policies\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.672889 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.672978 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.673028 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-session\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.673076 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.673155 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-user-template-error\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.673267 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.673348 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.673399 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.673453 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-audit-dir\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.774618 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-session\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.774713 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.774793 4828 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-user-template-error\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.774918 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.774990 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.775048 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.775104 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-audit-dir\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.775189 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcq99\" (UniqueName: \"kubernetes.io/projected/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-kube-api-access-mcq99\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.775245 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.775314 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.775380 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-user-template-login\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.775437 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-audit-policies\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.775494 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.775561 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.775709 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-audit-dir\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.777129 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.777804 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-audit-policies\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.778173 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.778605 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.784414 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-session\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.784486 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.784563 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.786024 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.786784 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-user-template-error\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.787412 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.788316 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-user-template-login\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.788367 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" 
(UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.800819 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcq99\" (UniqueName: \"kubernetes.io/projected/6fae31cb-b7d6-4a9b-b930-0e791bc7f395-kube-api-access-mcq99\") pod \"oauth-openshift-7c7b56dd96-xbbgz\" (UID: \"6fae31cb-b7d6-4a9b-b930-0e791bc7f395\") " pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.869269 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.873354 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.886965 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:33 crc kubenswrapper[4828]: I1205 19:08:33.984470 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 05 19:08:34 crc kubenswrapper[4828]: I1205 19:08:34.008737 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 05 19:08:34 crc kubenswrapper[4828]: I1205 19:08:34.071142 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz"] Dec 05 19:08:34 crc kubenswrapper[4828]: I1205 19:08:34.418595 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" event={"ID":"6fae31cb-b7d6-4a9b-b930-0e791bc7f395","Type":"ContainerStarted","Data":"0c48cd1fbc9c56eccfc09b442c13ce68ed91bd73258f09a3e64c75bf02695248"} Dec 05 19:08:34 crc kubenswrapper[4828]: I1205 19:08:34.418671 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" event={"ID":"6fae31cb-b7d6-4a9b-b930-0e791bc7f395","Type":"ContainerStarted","Data":"36b1768ffdd68ccc6cd31013d391997972148d648846bef9b780d40e07814be0"} Dec 05 19:08:34 crc kubenswrapper[4828]: I1205 19:08:34.440401 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" podStartSLOduration=66.440378108 podStartE2EDuration="1m6.440378108s" podCreationTimestamp="2025-12-05 19:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:08:34.434807707 +0000 UTC m=+292.330030023" watchObservedRunningTime="2025-12-05 19:08:34.440378108 +0000 UTC m=+292.335600424" Dec 05 19:08:34 crc kubenswrapper[4828]: I1205 19:08:34.927344 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 05 19:08:35 crc kubenswrapper[4828]: I1205 19:08:35.079106 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 05 19:08:35 crc kubenswrapper[4828]: I1205 19:08:35.139091 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 05 19:08:35 crc kubenswrapper[4828]: I1205 19:08:35.425217 
4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7c7b56dd96-xbbgz_6fae31cb-b7d6-4a9b-b930-0e791bc7f395/oauth-openshift/0.log" Dec 05 19:08:35 crc kubenswrapper[4828]: I1205 19:08:35.425304 4828 generic.go:334] "Generic (PLEG): container finished" podID="6fae31cb-b7d6-4a9b-b930-0e791bc7f395" containerID="0c48cd1fbc9c56eccfc09b442c13ce68ed91bd73258f09a3e64c75bf02695248" exitCode=255 Dec 05 19:08:35 crc kubenswrapper[4828]: I1205 19:08:35.425365 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" event={"ID":"6fae31cb-b7d6-4a9b-b930-0e791bc7f395","Type":"ContainerDied","Data":"0c48cd1fbc9c56eccfc09b442c13ce68ed91bd73258f09a3e64c75bf02695248"} Dec 05 19:08:35 crc kubenswrapper[4828]: I1205 19:08:35.426033 4828 scope.go:117] "RemoveContainer" containerID="0c48cd1fbc9c56eccfc09b442c13ce68ed91bd73258f09a3e64c75bf02695248" Dec 05 19:08:35 crc kubenswrapper[4828]: I1205 19:08:35.731588 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 05 19:08:35 crc kubenswrapper[4828]: I1205 19:08:35.796349 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 05 19:08:35 crc kubenswrapper[4828]: I1205 19:08:35.972566 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 05 19:08:36 crc kubenswrapper[4828]: I1205 19:08:36.218567 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 05 19:08:36 crc kubenswrapper[4828]: I1205 19:08:36.432838 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7c7b56dd96-xbbgz_6fae31cb-b7d6-4a9b-b930-0e791bc7f395/oauth-openshift/0.log" Dec 05 19:08:36 crc kubenswrapper[4828]: I1205 19:08:36.433135 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" event={"ID":"6fae31cb-b7d6-4a9b-b930-0e791bc7f395","Type":"ContainerStarted","Data":"0179f86911d07e03ab603384bf4f90e10157e3ccf019ee5b75d4f42daf580dae"} Dec 05 19:08:36 crc kubenswrapper[4828]: I1205 19:08:36.433537 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:36 crc kubenswrapper[4828]: I1205 19:08:36.438232 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7c7b56dd96-xbbgz" Dec 05 19:08:36 crc kubenswrapper[4828]: I1205 19:08:36.857099 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 05 19:08:40 crc kubenswrapper[4828]: I1205 19:08:40.809339 4828 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 19:08:40 crc kubenswrapper[4828]: I1205 19:08:40.809887 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4d7d8dcc0c484850a2f06711a5cebe6eba546e924d7e8d837274c471981e284b" gracePeriod=5 Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.372652 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.374399 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.444185 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.444227 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.444251 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.444266 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.444295 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.444302 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.444326 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.444350 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.444549 4828 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.444559 4828 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.444567 4828 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.444624 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.451108 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.455070 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.455288 4828 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.464143 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.464170 4828 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4e51bacf-af70-4beb-9c48-2616a4a1d700" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.468098 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.468134 4828 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4e51bacf-af70-4beb-9c48-2616a4a1d700" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.488039 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.488096 4828 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4d7d8dcc0c484850a2f06711a5cebe6eba546e924d7e8d837274c471981e284b" exitCode=137 Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.488142 4828 
scope.go:117] "RemoveContainer" containerID="4d7d8dcc0c484850a2f06711a5cebe6eba546e924d7e8d837274c471981e284b" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.488240 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.506476 4828 scope.go:117] "RemoveContainer" containerID="4d7d8dcc0c484850a2f06711a5cebe6eba546e924d7e8d837274c471981e284b" Dec 05 19:08:46 crc kubenswrapper[4828]: E1205 19:08:46.507070 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d7d8dcc0c484850a2f06711a5cebe6eba546e924d7e8d837274c471981e284b\": container with ID starting with 4d7d8dcc0c484850a2f06711a5cebe6eba546e924d7e8d837274c471981e284b not found: ID does not exist" containerID="4d7d8dcc0c484850a2f06711a5cebe6eba546e924d7e8d837274c471981e284b" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.507126 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d7d8dcc0c484850a2f06711a5cebe6eba546e924d7e8d837274c471981e284b"} err="failed to get container status \"4d7d8dcc0c484850a2f06711a5cebe6eba546e924d7e8d837274c471981e284b\": rpc error: code = NotFound desc = could not find container \"4d7d8dcc0c484850a2f06711a5cebe6eba546e924d7e8d837274c471981e284b\": container with ID starting with 4d7d8dcc0c484850a2f06711a5cebe6eba546e924d7e8d837274c471981e284b not found: ID does not exist" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.545359 4828 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 05 19:08:46 crc kubenswrapper[4828]: I1205 19:08:46.545401 4828 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.342754 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m957x"] Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.343560 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" podUID="2cf96d2c-9865-437a-a87c-63ca051a421d" containerName="controller-manager" containerID="cri-o://20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82" gracePeriod=30 Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.436453 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx"] Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.436674 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" podUID="0b2ef96f-044d-4b5e-97b8-9e413bc37088" containerName="route-controller-manager" containerID="cri-o://ee36bab915719571c53f14a6366c41121b8f4e0479b67d4faaf27783fa79b1a7" gracePeriod=30 Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.714099 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.765024 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" event={"ID":"0b2ef96f-044d-4b5e-97b8-9e413bc37088","Type":"ContainerDied","Data":"ee36bab915719571c53f14a6366c41121b8f4e0479b67d4faaf27783fa79b1a7"} Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.765022 4828 generic.go:334] "Generic (PLEG): container finished" podID="0b2ef96f-044d-4b5e-97b8-9e413bc37088" containerID="ee36bab915719571c53f14a6366c41121b8f4e0479b67d4faaf27783fa79b1a7" exitCode=0 Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.773447 4828 generic.go:334] "Generic (PLEG): container finished" podID="2cf96d2c-9865-437a-a87c-63ca051a421d" containerID="20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82" exitCode=0 Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.773484 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" event={"ID":"2cf96d2c-9865-437a-a87c-63ca051a421d","Type":"ContainerDied","Data":"20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82"} Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.773508 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" event={"ID":"2cf96d2c-9865-437a-a87c-63ca051a421d","Type":"ContainerDied","Data":"b768650f72009108bbc68a5e31152e9c73dad5ed5687d5a097a44cf6237dd18c"} Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.773523 4828 scope.go:117] "RemoveContainer" containerID="20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.773630 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m957x" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.796260 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdf8b\" (UniqueName: \"kubernetes.io/projected/2cf96d2c-9865-437a-a87c-63ca051a421d-kube-api-access-sdf8b\") pod \"2cf96d2c-9865-437a-a87c-63ca051a421d\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.796343 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf96d2c-9865-437a-a87c-63ca051a421d-serving-cert\") pod \"2cf96d2c-9865-437a-a87c-63ca051a421d\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.796362 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-config\") pod \"2cf96d2c-9865-437a-a87c-63ca051a421d\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.796379 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-proxy-ca-bundles\") pod \"2cf96d2c-9865-437a-a87c-63ca051a421d\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.796403 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-client-ca\") pod \"2cf96d2c-9865-437a-a87c-63ca051a421d\" (UID: \"2cf96d2c-9865-437a-a87c-63ca051a421d\") " Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.796560 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.797139 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-client-ca" (OuterVolumeSpecName: "client-ca") pod "2cf96d2c-9865-437a-a87c-63ca051a421d" (UID: "2cf96d2c-9865-437a-a87c-63ca051a421d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.797466 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-config" (OuterVolumeSpecName: "config") pod "2cf96d2c-9865-437a-a87c-63ca051a421d" (UID: "2cf96d2c-9865-437a-a87c-63ca051a421d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.797478 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2cf96d2c-9865-437a-a87c-63ca051a421d" (UID: "2cf96d2c-9865-437a-a87c-63ca051a421d"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.798033 4828 scope.go:117] "RemoveContainer" containerID="20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82" Dec 05 19:09:32 crc kubenswrapper[4828]: E1205 19:09:32.800012 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82\": container with ID starting with 20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82 not found: ID does not exist" containerID="20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.800052 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82"} err="failed to get container status \"20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82\": rpc error: code = NotFound desc = could not find container \"20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82\": container with ID starting with 20b7986cd9a4d8641eec30e9c5edcf628497e3657741e74ea63fdcd3a099fe82 not found: ID does not exist" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.802157 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf96d2c-9865-437a-a87c-63ca051a421d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2cf96d2c-9865-437a-a87c-63ca051a421d" (UID: "2cf96d2c-9865-437a-a87c-63ca051a421d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.802408 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf96d2c-9865-437a-a87c-63ca051a421d-kube-api-access-sdf8b" (OuterVolumeSpecName: "kube-api-access-sdf8b") pod "2cf96d2c-9865-437a-a87c-63ca051a421d" (UID: "2cf96d2c-9865-437a-a87c-63ca051a421d"). InnerVolumeSpecName "kube-api-access-sdf8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.898769 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b2ef96f-044d-4b5e-97b8-9e413bc37088-serving-cert\") pod \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.898892 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2ef96f-044d-4b5e-97b8-9e413bc37088-config\") pod \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.898939 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvd4h\" (UniqueName: \"kubernetes.io/projected/0b2ef96f-044d-4b5e-97b8-9e413bc37088-kube-api-access-lvd4h\") pod \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.898959 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b2ef96f-044d-4b5e-97b8-9e413bc37088-client-ca\") pod \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\" (UID: \"0b2ef96f-044d-4b5e-97b8-9e413bc37088\") " Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.899171 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdf8b\" (UniqueName: \"kubernetes.io/projected/2cf96d2c-9865-437a-a87c-63ca051a421d-kube-api-access-sdf8b\") on node \"crc\" DevicePath \"\"" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.899182 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf96d2c-9865-437a-a87c-63ca051a421d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.899193 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.899201 4828 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.899209 4828 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cf96d2c-9865-437a-a87c-63ca051a421d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.899660 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b2ef96f-044d-4b5e-97b8-9e413bc37088-client-ca" (OuterVolumeSpecName: "client-ca") pod "0b2ef96f-044d-4b5e-97b8-9e413bc37088" (UID: "0b2ef96f-044d-4b5e-97b8-9e413bc37088"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.899742 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b2ef96f-044d-4b5e-97b8-9e413bc37088-config" (OuterVolumeSpecName: "config") pod "0b2ef96f-044d-4b5e-97b8-9e413bc37088" (UID: "0b2ef96f-044d-4b5e-97b8-9e413bc37088"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.902014 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2ef96f-044d-4b5e-97b8-9e413bc37088-kube-api-access-lvd4h" (OuterVolumeSpecName: "kube-api-access-lvd4h") pod "0b2ef96f-044d-4b5e-97b8-9e413bc37088" (UID: "0b2ef96f-044d-4b5e-97b8-9e413bc37088"). InnerVolumeSpecName "kube-api-access-lvd4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:09:32 crc kubenswrapper[4828]: I1205 19:09:32.903192 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ef96f-044d-4b5e-97b8-9e413bc37088-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b2ef96f-044d-4b5e-97b8-9e413bc37088" (UID: "0b2ef96f-044d-4b5e-97b8-9e413bc37088"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:09:33 crc kubenswrapper[4828]: I1205 19:09:33.000267 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvd4h\" (UniqueName: \"kubernetes.io/projected/0b2ef96f-044d-4b5e-97b8-9e413bc37088-kube-api-access-lvd4h\") on node \"crc\" DevicePath \"\"" Dec 05 19:09:33 crc kubenswrapper[4828]: I1205 19:09:33.000310 4828 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b2ef96f-044d-4b5e-97b8-9e413bc37088-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:09:33 crc kubenswrapper[4828]: I1205 19:09:33.000328 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b2ef96f-044d-4b5e-97b8-9e413bc37088-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:09:33 crc kubenswrapper[4828]: I1205 19:09:33.000342 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2ef96f-044d-4b5e-97b8-9e413bc37088-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:09:33 crc kubenswrapper[4828]: I1205 19:09:33.100858 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m957x"] Dec 05 19:09:33 crc kubenswrapper[4828]: I1205 19:09:33.104229 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m957x"] Dec 05 19:09:33 crc kubenswrapper[4828]: I1205 19:09:33.797428 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" event={"ID":"0b2ef96f-044d-4b5e-97b8-9e413bc37088","Type":"ContainerDied","Data":"ac2b10578ae2cb1f9c339260bc1ac97b89be31b8746aec0c56d7155d0f83015a"} Dec 05 19:09:33 crc kubenswrapper[4828]: I1205 19:09:33.797508 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx" Dec 05 19:09:33 crc kubenswrapper[4828]: I1205 19:09:33.797541 4828 scope.go:117] "RemoveContainer" containerID="ee36bab915719571c53f14a6366c41121b8f4e0479b67d4faaf27783fa79b1a7" Dec 05 19:09:33 crc kubenswrapper[4828]: I1205 19:09:33.841594 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx"] Dec 05 19:09:33 crc kubenswrapper[4828]: I1205 19:09:33.845617 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rtxpx"] Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.469995 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b2ef96f-044d-4b5e-97b8-9e413bc37088" path="/var/lib/kubelet/pods/0b2ef96f-044d-4b5e-97b8-9e413bc37088/volumes" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.476061 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf96d2c-9865-437a-a87c-63ca051a421d" path="/var/lib/kubelet/pods/2cf96d2c-9865-437a-a87c-63ca051a421d/volumes" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.574840 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g"] Dec 05 19:09:34 crc kubenswrapper[4828]: E1205 19:09:34.575259 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2ef96f-044d-4b5e-97b8-9e413bc37088" containerName="route-controller-manager" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.575278 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2ef96f-044d-4b5e-97b8-9e413bc37088" containerName="route-controller-manager" Dec 05 19:09:34 crc kubenswrapper[4828]: E1205 19:09:34.575296 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.575303 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 19:09:34 crc kubenswrapper[4828]: E1205 19:09:34.575310 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf96d2c-9865-437a-a87c-63ca051a421d" containerName="controller-manager" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.575316 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf96d2c-9865-437a-a87c-63ca051a421d" containerName="controller-manager" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.575408 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.575421 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2ef96f-044d-4b5e-97b8-9e413bc37088" containerName="route-controller-manager" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.575428 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf96d2c-9865-437a-a87c-63ca051a421d" containerName="controller-manager" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.575837 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.577518 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.577522 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.577689 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.578279 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.578503 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.578526 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-tmwml"] Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.578588 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.579125 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.583189 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.583729 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.583935 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.584123 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.584469 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.584895 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g"] Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.585073 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.589426 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.590142 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-tmwml"] Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.621126 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9ff75179-ff05-4afd-8037-7c3a38535ed0-serving-cert\") pod \"route-controller-manager-7484d9ddcc-vhw9g\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.621193 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-serving-cert\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.621225 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-proxy-ca-bundles\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.621265 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-config\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.621302 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psjn5\" (UniqueName: \"kubernetes.io/projected/9ff75179-ff05-4afd-8037-7c3a38535ed0-kube-api-access-psjn5\") pod \"route-controller-manager-7484d9ddcc-vhw9g\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.621327 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff75179-ff05-4afd-8037-7c3a38535ed0-client-ca\") pod \"route-controller-manager-7484d9ddcc-vhw9g\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.621349 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff75179-ff05-4afd-8037-7c3a38535ed0-config\") pod \"route-controller-manager-7484d9ddcc-vhw9g\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.621364 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gntwk\" (UniqueName: \"kubernetes.io/projected/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-kube-api-access-gntwk\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.621380 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-client-ca\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.722681 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-serving-cert\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.722737 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-proxy-ca-bundles\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.722798 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-config\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.722836 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psjn5\" (UniqueName: \"kubernetes.io/projected/9ff75179-ff05-4afd-8037-7c3a38535ed0-kube-api-access-psjn5\") pod \"route-controller-manager-7484d9ddcc-vhw9g\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.722854 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff75179-ff05-4afd-8037-7c3a38535ed0-client-ca\") pod \"route-controller-manager-7484d9ddcc-vhw9g\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.722876 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff75179-ff05-4afd-8037-7c3a38535ed0-config\") pod \"route-controller-manager-7484d9ddcc-vhw9g\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.722893 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gntwk\" (UniqueName: \"kubernetes.io/projected/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-kube-api-access-gntwk\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.722908 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-client-ca\") pod \"controller-manager-5b85888b7c-tmwml\" 
(UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.722942 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff75179-ff05-4afd-8037-7c3a38535ed0-serving-cert\") pod \"route-controller-manager-7484d9ddcc-vhw9g\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.723840 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff75179-ff05-4afd-8037-7c3a38535ed0-client-ca\") pod \"route-controller-manager-7484d9ddcc-vhw9g\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.724007 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff75179-ff05-4afd-8037-7c3a38535ed0-config\") pod \"route-controller-manager-7484d9ddcc-vhw9g\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.724966 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-proxy-ca-bundles\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.725500 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-client-ca\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.726211 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-config\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.727400 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-serving-cert\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.732539 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff75179-ff05-4afd-8037-7c3a38535ed0-serving-cert\") pod \"route-controller-manager-7484d9ddcc-vhw9g\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.744514 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gntwk\" (UniqueName: \"kubernetes.io/projected/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-kube-api-access-gntwk\") pod \"controller-manager-5b85888b7c-tmwml\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.748219 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psjn5\" (UniqueName: \"kubernetes.io/projected/9ff75179-ff05-4afd-8037-7c3a38535ed0-kube-api-access-psjn5\") pod \"route-controller-manager-7484d9ddcc-vhw9g\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.894253 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:34 crc kubenswrapper[4828]: I1205 19:09:34.905229 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.131740 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g"] Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.165101 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-tmwml"] Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.259850 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.259897 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.810531 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" event={"ID":"9ff75179-ff05-4afd-8037-7c3a38535ed0","Type":"ContainerStarted","Data":"f58d271a76c7343c4732044372bb3988108eae100dae1bdc746e4db74151c645"} Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.810928 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" event={"ID":"9ff75179-ff05-4afd-8037-7c3a38535ed0","Type":"ContainerStarted","Data":"34dd4a232f6d8ec1a965cbc580a3e0e8b85df246c3570126b3ea77f9aee0ee1f"} Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.810943 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.812231 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" 
event={"ID":"6622e706-e3b7-4c7c-923c-6b0ae67b16bb","Type":"ContainerStarted","Data":"37b4d8e62b5fb2c1b122d3caa56682705429ed0ab53d41440cdd8cd47a57d7a8"} Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.812275 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" event={"ID":"6622e706-e3b7-4c7c-923c-6b0ae67b16bb","Type":"ContainerStarted","Data":"b3c87809170daac68f2c94a067a0b5eae14ff5c9531ac6e70cab801991b9097c"} Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.812464 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.816592 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.816838 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.828180 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" podStartSLOduration=3.828161538 podStartE2EDuration="3.828161538s" podCreationTimestamp="2025-12-05 19:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:09:35.826171846 +0000 UTC m=+353.721394152" watchObservedRunningTime="2025-12-05 19:09:35.828161538 +0000 UTC m=+353.723383844" Dec 05 19:09:35 crc kubenswrapper[4828]: I1205 19:09:35.865039 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" podStartSLOduration=3.865022334 podStartE2EDuration="3.865022334s" podCreationTimestamp="2025-12-05 19:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:09:35.847728431 +0000 UTC m=+353.742950737" watchObservedRunningTime="2025-12-05 19:09:35.865022334 +0000 UTC m=+353.760244640" Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.553526 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k624z"] Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.555280 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k624z" podUID="16299781-5338-4577-8a9a-2ec82c3b25b8" containerName="registry-server" containerID="cri-o://45b09ef07805ccf43451b458b681fad92428b2328c6af786da113af11e8931cb" gracePeriod=30 Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.567449 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l5xtb"] Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.567709 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l5xtb" podUID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" containerName="registry-server" containerID="cri-o://c152bceb49b0bb6d14cc008eeb74593807d909e3e26b7eabca34393bc1fe629e" gracePeriod=30 Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.572275 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-hcgdz"] Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.572519 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" podUID="ed1e0a7a-7a77-4343-8c33-e921e149ddab" containerName="marketplace-operator" containerID="cri-o://873f186cc202eaabfdbf10f25cf53bb03fa939649f9f945fe93e179d4b3f7283" gracePeriod=30 Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.597022 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zlnj"] Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.597276 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9zlnj" podUID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" containerName="registry-server" containerID="cri-o://5e4dc0a310c58d6631428726c53070cfe16bf2b5ddd36c37cde902c6d2beebfe" gracePeriod=30 Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.601199 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v2zjd"] Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.601443 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v2zjd" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" containerName="registry-server" containerID="cri-o://42ead5e6551c1bf1bcc7df4ae1e92528a1263f35664301d308a7bc7f938afaa9" gracePeriod=30 Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.604770 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9dx6f"] Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.605505 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.611046 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9dx6f"] Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.803155 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9dx6f\" (UID: \"57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.803606 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9dx6f\" (UID: \"57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.803647 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpnnd\" (UniqueName: \"kubernetes.io/projected/57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd-kube-api-access-lpnnd\") pod \"marketplace-operator-79b997595-9dx6f\" (UID: \"57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.904168 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9dx6f\" (UID: \"57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.904366 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9dx6f\" (UID: \"57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.904390 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpnnd\" (UniqueName: \"kubernetes.io/projected/57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd-kube-api-access-lpnnd\") pod \"marketplace-operator-79b997595-9dx6f\" (UID: \"57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.905312 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9dx6f\" (UID: \"57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.911054 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9dx6f\" (UID: \"57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.921738 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpnnd\" (UniqueName: \"kubernetes.io/projected/57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd-kube-api-access-lpnnd\") pod \"marketplace-operator-79b997595-9dx6f\" (UID: \"57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.931224 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.992564 4828 generic.go:334] "Generic (PLEG): container finished" podID="ed1e0a7a-7a77-4343-8c33-e921e149ddab" containerID="873f186cc202eaabfdbf10f25cf53bb03fa939649f9f945fe93e179d4b3f7283" exitCode=0 Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.992636 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" event={"ID":"ed1e0a7a-7a77-4343-8c33-e921e149ddab","Type":"ContainerDied","Data":"873f186cc202eaabfdbf10f25cf53bb03fa939649f9f945fe93e179d4b3f7283"} Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.995274 4828 generic.go:334] "Generic (PLEG): container finished" podID="16299781-5338-4577-8a9a-2ec82c3b25b8" containerID="45b09ef07805ccf43451b458b681fad92428b2328c6af786da113af11e8931cb" exitCode=0 Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.995297 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k624z" event={"ID":"16299781-5338-4577-8a9a-2ec82c3b25b8","Type":"ContainerDied","Data":"45b09ef07805ccf43451b458b681fad92428b2328c6af786da113af11e8931cb"} Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.997225 4828 generic.go:334] "Generic (PLEG): container finished" podID="a8f1d24b-86b2-4b82-a397-85027c6090f0" containerID="42ead5e6551c1bf1bcc7df4ae1e92528a1263f35664301d308a7bc7f938afaa9" exitCode=0 Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.997263 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2zjd" event={"ID":"a8f1d24b-86b2-4b82-a397-85027c6090f0","Type":"ContainerDied","Data":"42ead5e6551c1bf1bcc7df4ae1e92528a1263f35664301d308a7bc7f938afaa9"} Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.998863 4828 generic.go:334] "Generic (PLEG): container finished" podID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" containerID="c152bceb49b0bb6d14cc008eeb74593807d909e3e26b7eabca34393bc1fe629e" exitCode=0 Dec 05 19:10:04 crc kubenswrapper[4828]: I1205 19:10:04.999008 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5xtb" event={"ID":"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef","Type":"ContainerDied","Data":"c152bceb49b0bb6d14cc008eeb74593807d909e3e26b7eabca34393bc1fe629e"} Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.000430 4828 generic.go:334] "Generic (PLEG): container finished" podID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" containerID="5e4dc0a310c58d6631428726c53070cfe16bf2b5ddd36c37cde902c6d2beebfe" exitCode=0 Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.000536 4828 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-9zlnj" event={"ID":"ee23b6fa-d318-49d6-91fb-1dacad01ad5f","Type":"ContainerDied","Data":"5e4dc0a310c58d6631428726c53070cfe16bf2b5ddd36c37cde902c6d2beebfe"} Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.260273 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.260634 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:10:05 crc kubenswrapper[4828]: E1205 19:10:05.286744 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45b09ef07805ccf43451b458b681fad92428b2328c6af786da113af11e8931cb is running failed: container process not found" containerID="45b09ef07805ccf43451b458b681fad92428b2328c6af786da113af11e8931cb" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 19:10:05 crc kubenswrapper[4828]: E1205 19:10:05.287396 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45b09ef07805ccf43451b458b681fad92428b2328c6af786da113af11e8931cb is running failed: container process not found" containerID="45b09ef07805ccf43451b458b681fad92428b2328c6af786da113af11e8931cb" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 19:10:05 crc kubenswrapper[4828]: E1205 19:10:05.287689 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45b09ef07805ccf43451b458b681fad92428b2328c6af786da113af11e8931cb is running failed: container process not found" containerID="45b09ef07805ccf43451b458b681fad92428b2328c6af786da113af11e8931cb" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 19:10:05 crc kubenswrapper[4828]: E1205 19:10:05.287717 4828 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45b09ef07805ccf43451b458b681fad92428b2328c6af786da113af11e8931cb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-k624z" podUID="16299781-5338-4577-8a9a-2ec82c3b25b8" containerName="registry-server" Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.332171 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9dx6f"] Dec 05 19:10:05 crc kubenswrapper[4828]: W1205 19:10:05.346643 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57b2cfb8_3ff2_4192_a272_2d6ef4ced1cd.slice/crio-e38963afa92ab5ba453e807388e1760b43411abe2738ce3d747945fbc6f5fef8 WatchSource:0}: Error finding container e38963afa92ab5ba453e807388e1760b43411abe2738ce3d747945fbc6f5fef8: Status 404 returned error can't find the container with id e38963afa92ab5ba453e807388e1760b43411abe2738ce3d747945fbc6f5fef8 Dec 05 19:10:05 crc kubenswrapper[4828]: E1205 
19:10:05.353051 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c152bceb49b0bb6d14cc008eeb74593807d909e3e26b7eabca34393bc1fe629e is running failed: container process not found" containerID="c152bceb49b0bb6d14cc008eeb74593807d909e3e26b7eabca34393bc1fe629e" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 19:10:05 crc kubenswrapper[4828]: E1205 19:10:05.353537 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c152bceb49b0bb6d14cc008eeb74593807d909e3e26b7eabca34393bc1fe629e is running failed: container process not found" containerID="c152bceb49b0bb6d14cc008eeb74593807d909e3e26b7eabca34393bc1fe629e" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 19:10:05 crc kubenswrapper[4828]: E1205 19:10:05.354084 4828 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c152bceb49b0bb6d14cc008eeb74593807d909e3e26b7eabca34393bc1fe629e is running failed: container process not found" containerID="c152bceb49b0bb6d14cc008eeb74593807d909e3e26b7eabca34393bc1fe629e" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 19:10:05 crc kubenswrapper[4828]: E1205 19:10:05.354136 4828 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c152bceb49b0bb6d14cc008eeb74593807d909e3e26b7eabca34393bc1fe629e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-l5xtb" podUID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" containerName="registry-server" Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.543625 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.714680 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-catalog-content\") pod \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\" (UID: \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\") " Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.714732 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdhhq\" (UniqueName: \"kubernetes.io/projected/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-kube-api-access-sdhhq\") pod \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\" (UID: \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\") " Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.714773 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-utilities\") pod \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\" (UID: \"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef\") " Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.715872 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-utilities" (OuterVolumeSpecName: "utilities") pod "2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" (UID: "2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.725943 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-kube-api-access-sdhhq" (OuterVolumeSpecName: "kube-api-access-sdhhq") pod "2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" (UID: "2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef"). InnerVolumeSpecName "kube-api-access-sdhhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.764121 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" (UID: "2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.815720 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.815756 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdhhq\" (UniqueName: \"kubernetes.io/projected/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-kube-api-access-sdhhq\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.815776 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.828798 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.836250 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.843653 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zlnj" Dec 05 19:10:05 crc kubenswrapper[4828]: I1205 19:10:05.860812 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.008078 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2zjd" event={"ID":"a8f1d24b-86b2-4b82-a397-85027c6090f0","Type":"ContainerDied","Data":"582ae2eb18a7a2f402d846274b685876050267049f92fa57ec2ffb7dd534708d"} Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.008093 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v2zjd" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.008153 4828 scope.go:117] "RemoveContainer" containerID="42ead5e6551c1bf1bcc7df4ae1e92528a1263f35664301d308a7bc7f938afaa9" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.010868 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5xtb" event={"ID":"2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef","Type":"ContainerDied","Data":"a963efdfa9402b0d7709559694430189572bfe45ee1483b0dc219d5cc9dc0731"} Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.011092 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l5xtb" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017383 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zlnj" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017371 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zlnj" event={"ID":"ee23b6fa-d318-49d6-91fb-1dacad01ad5f","Type":"ContainerDied","Data":"f38d4ce4656861d8956ea66528a5a67798df8c4c2ed13134967063b2a2141648"} Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017607 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq7jp\" (UniqueName: \"kubernetes.io/projected/ed1e0a7a-7a77-4343-8c33-e921e149ddab-kube-api-access-vq7jp\") pod \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\" (UID: \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\") " Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017641 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gbc2\" (UniqueName: \"kubernetes.io/projected/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-kube-api-access-4gbc2\") pod \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\" (UID: \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\") " Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017677 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e0a7a-7a77-4343-8c33-e921e149ddab-marketplace-trusted-ca\") pod \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\" (UID: \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\") " Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017707 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-catalog-content\") pod \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\" (UID: \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\") " Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017728 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzvkz\" (UniqueName: \"kubernetes.io/projected/a8f1d24b-86b2-4b82-a397-85027c6090f0-kube-api-access-wzvkz\") pod \"a8f1d24b-86b2-4b82-a397-85027c6090f0\" (UID: \"a8f1d24b-86b2-4b82-a397-85027c6090f0\") " Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017783 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-utilities\") pod \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\" (UID: \"ee23b6fa-d318-49d6-91fb-1dacad01ad5f\") " Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017817 4828 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16299781-5338-4577-8a9a-2ec82c3b25b8-utilities\") pod \"16299781-5338-4577-8a9a-2ec82c3b25b8\" (UID: \"16299781-5338-4577-8a9a-2ec82c3b25b8\") " Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017879 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f1d24b-86b2-4b82-a397-85027c6090f0-catalog-content\") pod \"a8f1d24b-86b2-4b82-a397-85027c6090f0\" (UID: \"a8f1d24b-86b2-4b82-a397-85027c6090f0\") " Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017897 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f1d24b-86b2-4b82-a397-85027c6090f0-utilities\") pod \"a8f1d24b-86b2-4b82-a397-85027c6090f0\" (UID: \"a8f1d24b-86b2-4b82-a397-85027c6090f0\") " Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017921 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mvvm\" (UniqueName: \"kubernetes.io/projected/16299781-5338-4577-8a9a-2ec82c3b25b8-kube-api-access-4mvvm\") pod \"16299781-5338-4577-8a9a-2ec82c3b25b8\" (UID: \"16299781-5338-4577-8a9a-2ec82c3b25b8\") " Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017947 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed1e0a7a-7a77-4343-8c33-e921e149ddab-marketplace-operator-metrics\") pod \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\" (UID: \"ed1e0a7a-7a77-4343-8c33-e921e149ddab\") " Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.017963 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16299781-5338-4577-8a9a-2ec82c3b25b8-catalog-content\") pod \"16299781-5338-4577-8a9a-2ec82c3b25b8\" (UID: \"16299781-5338-4577-8a9a-2ec82c3b25b8\") " Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.018546 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1e0a7a-7a77-4343-8c33-e921e149ddab-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ed1e0a7a-7a77-4343-8c33-e921e149ddab" (UID: "ed1e0a7a-7a77-4343-8c33-e921e149ddab"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.019995 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16299781-5338-4577-8a9a-2ec82c3b25b8-utilities" (OuterVolumeSpecName: "utilities") pod "16299781-5338-4577-8a9a-2ec82c3b25b8" (UID: "16299781-5338-4577-8a9a-2ec82c3b25b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.021592 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-utilities" (OuterVolumeSpecName: "utilities") pod "ee23b6fa-d318-49d6-91fb-1dacad01ad5f" (UID: "ee23b6fa-d318-49d6-91fb-1dacad01ad5f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.020091 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" event={"ID":"57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd","Type":"ContainerStarted","Data":"71abc25812dd64584d5d9481118957711a0964181b767a20df56dd3d612c406a"} Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.022468 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" event={"ID":"57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd","Type":"ContainerStarted","Data":"e38963afa92ab5ba453e807388e1760b43411abe2738ce3d747945fbc6f5fef8"} Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.022636 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.023971 4828 scope.go:117] "RemoveContainer" containerID="a378b12b6712d2239882062fbf5d82613f1b1817e7b680b56d28c2de9b7e341c" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.025113 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f1d24b-86b2-4b82-a397-85027c6090f0-kube-api-access-wzvkz" (OuterVolumeSpecName: "kube-api-access-wzvkz") pod "a8f1d24b-86b2-4b82-a397-85027c6090f0" (UID: "a8f1d24b-86b2-4b82-a397-85027c6090f0"). InnerVolumeSpecName "kube-api-access-wzvkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.025658 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f1d24b-86b2-4b82-a397-85027c6090f0-utilities" (OuterVolumeSpecName: "utilities") pod "a8f1d24b-86b2-4b82-a397-85027c6090f0" (UID: "a8f1d24b-86b2-4b82-a397-85027c6090f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.027063 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.027217 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.027967 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-kube-api-access-4gbc2" (OuterVolumeSpecName: "kube-api-access-4gbc2") pod "ee23b6fa-d318-49d6-91fb-1dacad01ad5f" (UID: "ee23b6fa-d318-49d6-91fb-1dacad01ad5f"). InnerVolumeSpecName "kube-api-access-4gbc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.028148 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hcgdz" event={"ID":"ed1e0a7a-7a77-4343-8c33-e921e149ddab","Type":"ContainerDied","Data":"dc272372795e4302881905616aa3611e0adeb909667fa0bdf2b74eb29cc74840"} Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.028230 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e0a7a-7a77-4343-8c33-e921e149ddab-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ed1e0a7a-7a77-4343-8c33-e921e149ddab" (UID: "ed1e0a7a-7a77-4343-8c33-e921e149ddab"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.030004 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16299781-5338-4577-8a9a-2ec82c3b25b8-kube-api-access-4mvvm" (OuterVolumeSpecName: "kube-api-access-4mvvm") pod "16299781-5338-4577-8a9a-2ec82c3b25b8" (UID: "16299781-5338-4577-8a9a-2ec82c3b25b8"). InnerVolumeSpecName "kube-api-access-4mvvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.030882 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1e0a7a-7a77-4343-8c33-e921e149ddab-kube-api-access-vq7jp" (OuterVolumeSpecName: "kube-api-access-vq7jp") pod "ed1e0a7a-7a77-4343-8c33-e921e149ddab" (UID: "ed1e0a7a-7a77-4343-8c33-e921e149ddab"). InnerVolumeSpecName "kube-api-access-vq7jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.031457 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k624z" event={"ID":"16299781-5338-4577-8a9a-2ec82c3b25b8","Type":"ContainerDied","Data":"96f34d19564ad5dc743e774c848541c62904a14d1c945830eaf1e44ab3fbb105"} Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.031568 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k624z" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.047986 4828 scope.go:117] "RemoveContainer" containerID="1273993ba1a492beba1c123b69aaa2f215c9ba27cf5565c8f4e8dd86ffda8986" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.057312 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee23b6fa-d318-49d6-91fb-1dacad01ad5f" (UID: "ee23b6fa-d318-49d6-91fb-1dacad01ad5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.067867 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9dx6f" podStartSLOduration=2.067805339 podStartE2EDuration="2.067805339s" podCreationTimestamp="2025-12-05 19:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:10:06.045538936 +0000 UTC m=+383.940761242" watchObservedRunningTime="2025-12-05 19:10:06.067805339 +0000 UTC m=+383.963027645" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.076630 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16299781-5338-4577-8a9a-2ec82c3b25b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16299781-5338-4577-8a9a-2ec82c3b25b8" (UID: "16299781-5338-4577-8a9a-2ec82c3b25b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.082894 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l5xtb"] Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.101119 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l5xtb"] Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.101755 4828 scope.go:117] "RemoveContainer" containerID="c152bceb49b0bb6d14cc008eeb74593807d909e3e26b7eabca34393bc1fe629e" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.119811 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mvvm\" (UniqueName: \"kubernetes.io/projected/16299781-5338-4577-8a9a-2ec82c3b25b8-kube-api-access-4mvvm\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.119854 4828 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed1e0a7a-7a77-4343-8c33-e921e149ddab-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.119864 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16299781-5338-4577-8a9a-2ec82c3b25b8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.119874 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq7jp\" (UniqueName: \"kubernetes.io/projected/ed1e0a7a-7a77-4343-8c33-e921e149ddab-kube-api-access-vq7jp\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.119885 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gbc2\" (UniqueName: \"kubernetes.io/projected/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-kube-api-access-4gbc2\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.119896 4828 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e0a7a-7a77-4343-8c33-e921e149ddab-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.119905 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-catalog-content\") on node 
\"crc\" DevicePath \"\"" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.119913 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzvkz\" (UniqueName: \"kubernetes.io/projected/a8f1d24b-86b2-4b82-a397-85027c6090f0-kube-api-access-wzvkz\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.119922 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee23b6fa-d318-49d6-91fb-1dacad01ad5f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.119938 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16299781-5338-4577-8a9a-2ec82c3b25b8-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.119946 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f1d24b-86b2-4b82-a397-85027c6090f0-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.125015 4828 scope.go:117] "RemoveContainer" containerID="496eb5a613a3af2d823948e5a6a431c4412a48199aa41a92364d8e356680c1c6" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.139310 4828 scope.go:117] "RemoveContainer" containerID="12c4e02ff89ceb7aacb7bc534d7e598911fd9945878cc69434cb3a24650b470e" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.152155 4828 scope.go:117] "RemoveContainer" containerID="5e4dc0a310c58d6631428726c53070cfe16bf2b5ddd36c37cde902c6d2beebfe" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.158693 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f1d24b-86b2-4b82-a397-85027c6090f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8f1d24b-86b2-4b82-a397-85027c6090f0" (UID: "a8f1d24b-86b2-4b82-a397-85027c6090f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.171491 4828 scope.go:117] "RemoveContainer" containerID="be0a26dfda023570f6ab6bce2765fba396d263cc1f5f64ad1f0e01d76abb4e08" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.196072 4828 scope.go:117] "RemoveContainer" containerID="7828b439706cd72464043848f5e34ef594c8ef4c4750dce22996d8cb0727488b" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.217735 4828 scope.go:117] "RemoveContainer" containerID="873f186cc202eaabfdbf10f25cf53bb03fa939649f9f945fe93e179d4b3f7283" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.220940 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f1d24b-86b2-4b82-a397-85027c6090f0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.231392 4828 scope.go:117] "RemoveContainer" containerID="45b09ef07805ccf43451b458b681fad92428b2328c6af786da113af11e8931cb" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.242079 4828 scope.go:117] "RemoveContainer" containerID="3bedbcd92585427088622036796cf54c94c45f2120a7849a790880170b257cad" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.254960 4828 scope.go:117] "RemoveContainer" containerID="5d610ecffb4f0132832ee2ede40d97a28d633f4fdcefcdd080eace2252f312b1" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.356392 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v2zjd"] Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.367672 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v2zjd"] Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.381629 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zlnj"] Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.387862 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zlnj"] Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.400022 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hcgdz"] Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.405213 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hcgdz"] Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.409771 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k624z"] Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.413624 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k624z"] Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.452439 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16299781-5338-4577-8a9a-2ec82c3b25b8" path="/var/lib/kubelet/pods/16299781-5338-4577-8a9a-2ec82c3b25b8/volumes" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.453220 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" path="/var/lib/kubelet/pods/2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef/volumes" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.453863 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" path="/var/lib/kubelet/pods/a8f1d24b-86b2-4b82-a397-85027c6090f0/volumes" Dec 05 
19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.455047 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed1e0a7a-7a77-4343-8c33-e921e149ddab" path="/var/lib/kubelet/pods/ed1e0a7a-7a77-4343-8c33-e921e149ddab/volumes" Dec 05 19:10:06 crc kubenswrapper[4828]: I1205 19:10:06.455525 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" path="/var/lib/kubelet/pods/ee23b6fa-d318-49d6-91fb-1dacad01ad5f/volumes" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.431900 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bch5n"] Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432336 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" containerName="registry-server" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432354 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" containerName="registry-server" Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432384 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" containerName="extract-content" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432392 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" containerName="extract-content" Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432404 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16299781-5338-4577-8a9a-2ec82c3b25b8" containerName="extract-content" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432413 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="16299781-5338-4577-8a9a-2ec82c3b25b8" containerName="extract-content" Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432433 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" containerName="extract-content" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432441 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" containerName="extract-content" Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432457 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" containerName="extract-content" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432466 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" containerName="extract-content" Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432478 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16299781-5338-4577-8a9a-2ec82c3b25b8" containerName="extract-utilities" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432486 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="16299781-5338-4577-8a9a-2ec82c3b25b8" containerName="extract-utilities" Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432504 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" containerName="registry-server" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432511 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" containerName="registry-server" Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432530 4828 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="16299781-5338-4577-8a9a-2ec82c3b25b8" containerName="registry-server" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432543 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="16299781-5338-4577-8a9a-2ec82c3b25b8" containerName="registry-server" Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432553 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" containerName="extract-utilities" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432561 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" containerName="extract-utilities" Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432580 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1e0a7a-7a77-4343-8c33-e921e149ddab" containerName="marketplace-operator" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432588 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1e0a7a-7a77-4343-8c33-e921e149ddab" containerName="marketplace-operator" Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432610 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" containerName="extract-utilities" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432618 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" containerName="extract-utilities" Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432634 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" containerName="registry-server" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432641 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" containerName="registry-server" Dec 05 19:10:07 crc kubenswrapper[4828]: E1205 19:10:07.432652 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" containerName="extract-utilities" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432660 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" containerName="extract-utilities" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432912 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed1e0a7a-7a77-4343-8c33-e921e149ddab" containerName="marketplace-operator" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432935 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee23b6fa-d318-49d6-91fb-1dacad01ad5f" containerName="registry-server" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432952 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f1d24b-86b2-4b82-a397-85027c6090f0" containerName="registry-server" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432969 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="16299781-5338-4577-8a9a-2ec82c3b25b8" containerName="registry-server" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.432982 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2ef9c7-5aba-42fe-b2b2-7e115361a9ef" containerName="registry-server" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.434575 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.437863 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.443424 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bch5n"] Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.536227 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b4b588-b3c8-4a99-b13c-89413002545e-utilities\") pod \"community-operators-bch5n\" (UID: \"d1b4b588-b3c8-4a99-b13c-89413002545e\") " pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.536332 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd97n\" (UniqueName: \"kubernetes.io/projected/d1b4b588-b3c8-4a99-b13c-89413002545e-kube-api-access-kd97n\") pod \"community-operators-bch5n\" (UID: \"d1b4b588-b3c8-4a99-b13c-89413002545e\") " pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.536379 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b4b588-b3c8-4a99-b13c-89413002545e-catalog-content\") pod \"community-operators-bch5n\" (UID: \"d1b4b588-b3c8-4a99-b13c-89413002545e\") " pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.637729 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b4b588-b3c8-4a99-b13c-89413002545e-catalog-content\") pod \"community-operators-bch5n\" (UID: \"d1b4b588-b3c8-4a99-b13c-89413002545e\") " pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.637777 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b4b588-b3c8-4a99-b13c-89413002545e-utilities\") pod \"community-operators-bch5n\" (UID: \"d1b4b588-b3c8-4a99-b13c-89413002545e\") " pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.637858 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd97n\" (UniqueName: \"kubernetes.io/projected/d1b4b588-b3c8-4a99-b13c-89413002545e-kube-api-access-kd97n\") pod \"community-operators-bch5n\" (UID: \"d1b4b588-b3c8-4a99-b13c-89413002545e\") " pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.638258 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b4b588-b3c8-4a99-b13c-89413002545e-catalog-content\") pod \"community-operators-bch5n\" (UID: \"d1b4b588-b3c8-4a99-b13c-89413002545e\") " pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.638273 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b4b588-b3c8-4a99-b13c-89413002545e-utilities\") pod \"community-operators-bch5n\" (UID: 
\"d1b4b588-b3c8-4a99-b13c-89413002545e\") " pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.658653 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd97n\" (UniqueName: \"kubernetes.io/projected/d1b4b588-b3c8-4a99-b13c-89413002545e-kube-api-access-kd97n\") pod \"community-operators-bch5n\" (UID: \"d1b4b588-b3c8-4a99-b13c-89413002545e\") " pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:07 crc kubenswrapper[4828]: I1205 19:10:07.763504 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.023164 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pmljs"] Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.024586 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.027057 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.034496 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmljs"] Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.144793 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4ba139-0d26-4f2c-b265-35af463685f1-utilities\") pod \"redhat-marketplace-pmljs\" (UID: \"4a4ba139-0d26-4f2c-b265-35af463685f1\") " pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.144858 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkr9w\" (UniqueName: \"kubernetes.io/projected/4a4ba139-0d26-4f2c-b265-35af463685f1-kube-api-access-dkr9w\") pod \"redhat-marketplace-pmljs\" (UID: \"4a4ba139-0d26-4f2c-b265-35af463685f1\") " pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.144927 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4ba139-0d26-4f2c-b265-35af463685f1-catalog-content\") pod \"redhat-marketplace-pmljs\" (UID: \"4a4ba139-0d26-4f2c-b265-35af463685f1\") " pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.219260 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bch5n"] Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.247479 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkr9w\" (UniqueName: \"kubernetes.io/projected/4a4ba139-0d26-4f2c-b265-35af463685f1-kube-api-access-dkr9w\") pod \"redhat-marketplace-pmljs\" (UID: \"4a4ba139-0d26-4f2c-b265-35af463685f1\") " pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.247531 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4ba139-0d26-4f2c-b265-35af463685f1-catalog-content\") pod \"redhat-marketplace-pmljs\" (UID: 
\"4a4ba139-0d26-4f2c-b265-35af463685f1\") " pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.247601 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4ba139-0d26-4f2c-b265-35af463685f1-utilities\") pod \"redhat-marketplace-pmljs\" (UID: \"4a4ba139-0d26-4f2c-b265-35af463685f1\") " pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.248085 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4ba139-0d26-4f2c-b265-35af463685f1-utilities\") pod \"redhat-marketplace-pmljs\" (UID: \"4a4ba139-0d26-4f2c-b265-35af463685f1\") " pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.248179 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4ba139-0d26-4f2c-b265-35af463685f1-catalog-content\") pod \"redhat-marketplace-pmljs\" (UID: \"4a4ba139-0d26-4f2c-b265-35af463685f1\") " pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.264182 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkr9w\" (UniqueName: \"kubernetes.io/projected/4a4ba139-0d26-4f2c-b265-35af463685f1-kube-api-access-dkr9w\") pod \"redhat-marketplace-pmljs\" (UID: \"4a4ba139-0d26-4f2c-b265-35af463685f1\") " pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.345592 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:08 crc kubenswrapper[4828]: I1205 19:10:08.758523 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmljs"] Dec 05 19:10:08 crc kubenswrapper[4828]: W1205 19:10:08.763145 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a4ba139_0d26_4f2c_b265_35af463685f1.slice/crio-f51017b2965791b6ec5586d4d2e46e43b8996c1fcfa0a7d424bef0c847e2051d WatchSource:0}: Error finding container f51017b2965791b6ec5586d4d2e46e43b8996c1fcfa0a7d424bef0c847e2051d: Status 404 returned error can't find the container with id f51017b2965791b6ec5586d4d2e46e43b8996c1fcfa0a7d424bef0c847e2051d Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.056482 4828 generic.go:334] "Generic (PLEG): container finished" podID="d1b4b588-b3c8-4a99-b13c-89413002545e" containerID="6d6185c671cdd6ed43bb11fe24b9c778c4ba30357e26dd6faaa2ee79cb16c1b7" exitCode=0 Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.056579 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bch5n" event={"ID":"d1b4b588-b3c8-4a99-b13c-89413002545e","Type":"ContainerDied","Data":"6d6185c671cdd6ed43bb11fe24b9c778c4ba30357e26dd6faaa2ee79cb16c1b7"} Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.056858 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bch5n" event={"ID":"d1b4b588-b3c8-4a99-b13c-89413002545e","Type":"ContainerStarted","Data":"e22875062d105b9d24e8a609dd9fffb6f6cb5d7f0bdeae78ec8f3037d19a4689"} Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.058709 4828 generic.go:334] "Generic (PLEG): 
container finished" podID="4a4ba139-0d26-4f2c-b265-35af463685f1" containerID="b15111018179d2471bec9a0270ce0c7065e6f3c80c175b8754feddcc56409e83" exitCode=0 Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.058760 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmljs" event={"ID":"4a4ba139-0d26-4f2c-b265-35af463685f1","Type":"ContainerDied","Data":"b15111018179d2471bec9a0270ce0c7065e6f3c80c175b8754feddcc56409e83"} Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.058794 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmljs" event={"ID":"4a4ba139-0d26-4f2c-b265-35af463685f1","Type":"ContainerStarted","Data":"f51017b2965791b6ec5586d4d2e46e43b8996c1fcfa0a7d424bef0c847e2051d"} Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.823786 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ztsk4"] Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.825248 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.831464 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.835043 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ztsk4"] Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.980480 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a85d300-213b-4bda-aff7-73bc53e7e246-utilities\") pod \"redhat-operators-ztsk4\" (UID: \"8a85d300-213b-4bda-aff7-73bc53e7e246\") " pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.980546 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a85d300-213b-4bda-aff7-73bc53e7e246-catalog-content\") pod \"redhat-operators-ztsk4\" (UID: \"8a85d300-213b-4bda-aff7-73bc53e7e246\") " pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:09 crc kubenswrapper[4828]: I1205 19:10:09.980593 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvp99\" (UniqueName: \"kubernetes.io/projected/8a85d300-213b-4bda-aff7-73bc53e7e246-kube-api-access-pvp99\") pod \"redhat-operators-ztsk4\" (UID: \"8a85d300-213b-4bda-aff7-73bc53e7e246\") " pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.065203 4828 generic.go:334] "Generic (PLEG): container finished" podID="4a4ba139-0d26-4f2c-b265-35af463685f1" containerID="817f752b814920353a48ac73168c0f30fb6aa3671b925b217cb8befd249dcd3e" exitCode=0 Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.065270 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmljs" event={"ID":"4a4ba139-0d26-4f2c-b265-35af463685f1","Type":"ContainerDied","Data":"817f752b814920353a48ac73168c0f30fb6aa3671b925b217cb8befd249dcd3e"} Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.068700 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bch5n" 
event={"ID":"d1b4b588-b3c8-4a99-b13c-89413002545e","Type":"ContainerStarted","Data":"04ac7327e5a698019c1a44f384acc79994681dde8e290fc648824d64020598a1"} Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.081062 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvp99\" (UniqueName: \"kubernetes.io/projected/8a85d300-213b-4bda-aff7-73bc53e7e246-kube-api-access-pvp99\") pod \"redhat-operators-ztsk4\" (UID: \"8a85d300-213b-4bda-aff7-73bc53e7e246\") " pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.081304 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a85d300-213b-4bda-aff7-73bc53e7e246-utilities\") pod \"redhat-operators-ztsk4\" (UID: \"8a85d300-213b-4bda-aff7-73bc53e7e246\") " pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.081366 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a85d300-213b-4bda-aff7-73bc53e7e246-catalog-content\") pod \"redhat-operators-ztsk4\" (UID: \"8a85d300-213b-4bda-aff7-73bc53e7e246\") " pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.085424 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a85d300-213b-4bda-aff7-73bc53e7e246-utilities\") pod \"redhat-operators-ztsk4\" (UID: \"8a85d300-213b-4bda-aff7-73bc53e7e246\") " pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.085529 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a85d300-213b-4bda-aff7-73bc53e7e246-catalog-content\") pod \"redhat-operators-ztsk4\" (UID: \"8a85d300-213b-4bda-aff7-73bc53e7e246\") " pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.105409 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvp99\" (UniqueName: \"kubernetes.io/projected/8a85d300-213b-4bda-aff7-73bc53e7e246-kube-api-access-pvp99\") pod \"redhat-operators-ztsk4\" (UID: \"8a85d300-213b-4bda-aff7-73bc53e7e246\") " pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.155035 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.424398 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2tlrr"] Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.428686 4828 util.go:30] "No sandbox for pod can be found. 
Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.428686 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.430252 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2tlrr"] Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.431862 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.548593 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ztsk4"] Dec 05 19:10:10 crc kubenswrapper[4828]: W1205 19:10:10.554231 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a85d300_213b_4bda_aff7_73bc53e7e246.slice/crio-906c62ac9e105c5573b80bf07a15c9a3102be3c0a2dbced7bc3e20fa82336529 WatchSource:0}: Error finding container 906c62ac9e105c5573b80bf07a15c9a3102be3c0a2dbced7bc3e20fa82336529: Status 404 returned error can't find the container with id 906c62ac9e105c5573b80bf07a15c9a3102be3c0a2dbced7bc3e20fa82336529 Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.587777 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e56fecb-3765-4d29-9c94-02257c7e655b-catalog-content\") pod \"certified-operators-2tlrr\" (UID: \"1e56fecb-3765-4d29-9c94-02257c7e655b\") " pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.587839 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e56fecb-3765-4d29-9c94-02257c7e655b-utilities\") pod \"certified-operators-2tlrr\" (UID: \"1e56fecb-3765-4d29-9c94-02257c7e655b\") " pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.587997 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nz98\" (UniqueName: \"kubernetes.io/projected/1e56fecb-3765-4d29-9c94-02257c7e655b-kube-api-access-6nz98\") pod \"certified-operators-2tlrr\" (UID: \"1e56fecb-3765-4d29-9c94-02257c7e655b\") " pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.689398 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e56fecb-3765-4d29-9c94-02257c7e655b-catalog-content\") pod \"certified-operators-2tlrr\" (UID: \"1e56fecb-3765-4d29-9c94-02257c7e655b\") " pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.689445 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e56fecb-3765-4d29-9c94-02257c7e655b-utilities\") pod \"certified-operators-2tlrr\" (UID: \"1e56fecb-3765-4d29-9c94-02257c7e655b\") " pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.689488 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nz98\" (UniqueName: \"kubernetes.io/projected/1e56fecb-3765-4d29-9c94-02257c7e655b-kube-api-access-6nz98\") pod \"certified-operators-2tlrr\" (UID: \"1e56fecb-3765-4d29-9c94-02257c7e655b\") "
pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.690088 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e56fecb-3765-4d29-9c94-02257c7e655b-catalog-content\") pod \"certified-operators-2tlrr\" (UID: \"1e56fecb-3765-4d29-9c94-02257c7e655b\") " pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.690300 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e56fecb-3765-4d29-9c94-02257c7e655b-utilities\") pod \"certified-operators-2tlrr\" (UID: \"1e56fecb-3765-4d29-9c94-02257c7e655b\") " pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.708683 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nz98\" (UniqueName: \"kubernetes.io/projected/1e56fecb-3765-4d29-9c94-02257c7e655b-kube-api-access-6nz98\") pod \"certified-operators-2tlrr\" (UID: \"1e56fecb-3765-4d29-9c94-02257c7e655b\") " pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:10 crc kubenswrapper[4828]: I1205 19:10:10.744239 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:11 crc kubenswrapper[4828]: I1205 19:10:11.075044 4828 generic.go:334] "Generic (PLEG): container finished" podID="d1b4b588-b3c8-4a99-b13c-89413002545e" containerID="04ac7327e5a698019c1a44f384acc79994681dde8e290fc648824d64020598a1" exitCode=0 Dec 05 19:10:11 crc kubenswrapper[4828]: I1205 19:10:11.075139 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bch5n" event={"ID":"d1b4b588-b3c8-4a99-b13c-89413002545e","Type":"ContainerDied","Data":"04ac7327e5a698019c1a44f384acc79994681dde8e290fc648824d64020598a1"} Dec 05 19:10:11 crc kubenswrapper[4828]: I1205 19:10:11.078313 4828 generic.go:334] "Generic (PLEG): container finished" podID="8a85d300-213b-4bda-aff7-73bc53e7e246" containerID="69dc37abbd4bae25c13bbd41cc0871d26a0111d8ab2c799a71bacb33937b8e3f" exitCode=0 Dec 05 19:10:11 crc kubenswrapper[4828]: I1205 19:10:11.078840 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztsk4" event={"ID":"8a85d300-213b-4bda-aff7-73bc53e7e246","Type":"ContainerDied","Data":"69dc37abbd4bae25c13bbd41cc0871d26a0111d8ab2c799a71bacb33937b8e3f"} Dec 05 19:10:11 crc kubenswrapper[4828]: I1205 19:10:11.078870 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztsk4" event={"ID":"8a85d300-213b-4bda-aff7-73bc53e7e246","Type":"ContainerStarted","Data":"906c62ac9e105c5573b80bf07a15c9a3102be3c0a2dbced7bc3e20fa82336529"} Dec 05 19:10:11 crc kubenswrapper[4828]: I1205 19:10:11.084112 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmljs" event={"ID":"4a4ba139-0d26-4f2c-b265-35af463685f1","Type":"ContainerStarted","Data":"0129cb2b592e69d0bf7dc3ae33d64f519c6c6d6f5b751f16c3be5e6235949d93"} Dec 05 19:10:11 crc kubenswrapper[4828]: I1205 19:10:11.132789 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pmljs" podStartSLOduration=1.730133791 podStartE2EDuration="3.132767599s" podCreationTimestamp="2025-12-05 19:10:08 +0000 UTC" firstStartedPulling="2025-12-05 
Dec 05 19:10:11 crc kubenswrapper[4828]: I1205 19:10:11.142201 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2tlrr"] Dec 05 19:10:11 crc kubenswrapper[4828]: W1205 19:10:11.149043 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e56fecb_3765_4d29_9c94_02257c7e655b.slice/crio-3d3871ae6f58e64c4995e983a6b3757b008e1148b8782686f86b5a0b5c758345 WatchSource:0}: Error finding container 3d3871ae6f58e64c4995e983a6b3757b008e1148b8782686f86b5a0b5c758345: Status 404 returned error can't find the container with id 3d3871ae6f58e64c4995e983a6b3757b008e1148b8782686f86b5a0b5c758345 Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.091885 4828 generic.go:334] "Generic (PLEG): container finished" podID="1e56fecb-3765-4d29-9c94-02257c7e655b" containerID="264e3354c19edfc4eabe99602cd89ec1fc5e989c9da3e99c4ebcc1c16d87ed9f" exitCode=0 Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.091964 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tlrr" event={"ID":"1e56fecb-3765-4d29-9c94-02257c7e655b","Type":"ContainerDied","Data":"264e3354c19edfc4eabe99602cd89ec1fc5e989c9da3e99c4ebcc1c16d87ed9f"} Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.092394 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tlrr" event={"ID":"1e56fecb-3765-4d29-9c94-02257c7e655b","Type":"ContainerStarted","Data":"3d3871ae6f58e64c4995e983a6b3757b008e1148b8782686f86b5a0b5c758345"} Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.094396 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztsk4" event={"ID":"8a85d300-213b-4bda-aff7-73bc53e7e246","Type":"ContainerStarted","Data":"2448f6663690c2fa268ccf951e6c71574787c7237042f811e61812754044e35b"} Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.097710 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bch5n" event={"ID":"d1b4b588-b3c8-4a99-b13c-89413002545e","Type":"ContainerStarted","Data":"ae574a441aa975245fe4eb54ce74161016021b55fa3f1187a2300c603660f08c"} Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.154318 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bch5n" podStartSLOduration=2.7416670229999998 podStartE2EDuration="5.154302452s" podCreationTimestamp="2025-12-05 19:10:07 +0000 UTC" firstStartedPulling="2025-12-05 19:10:09.059037289 +0000 UTC m=+386.954259615" lastFinishedPulling="2025-12-05 19:10:11.471672738 +0000 UTC m=+389.366895044" observedRunningTime="2025-12-05 19:10:12.150885163 +0000 UTC m=+390.046107469" watchObservedRunningTime="2025-12-05 19:10:12.154302452 +0000 UTC m=+390.049524758" Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.336452 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-tmwml"] Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.336878 4828 kuberuntime_container.go:808] "Killing container with a grace period"
pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" podUID="6622e706-e3b7-4c7c-923c-6b0ae67b16bb" containerName="controller-manager" containerID="cri-o://37b4d8e62b5fb2c1b122d3caa56682705429ed0ab53d41440cdd8cd47a57d7a8" gracePeriod=30 Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.367959 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g"] Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.368162 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" podUID="9ff75179-ff05-4afd-8037-7c3a38535ed0" containerName="route-controller-manager" containerID="cri-o://f58d271a76c7343c4732044372bb3988108eae100dae1bdc746e4db74151c645" gracePeriod=30 Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.959690 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6wz5h"] Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.960715 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:12 crc kubenswrapper[4828]: I1205 19:10:12.980936 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6wz5h"] Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.116520 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7psj8\" (UniqueName: \"kubernetes.io/projected/a2b7b095-f2c4-442e-8df5-0343dca4fb74-kube-api-access-7psj8\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.116902 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.116941 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2b7b095-f2c4-442e-8df5-0343dca4fb74-registry-certificates\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.116972 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2b7b095-f2c4-442e-8df5-0343dca4fb74-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.116996 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2b7b095-f2c4-442e-8df5-0343dca4fb74-bound-sa-token\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: 
\"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.117024 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2b7b095-f2c4-442e-8df5-0343dca4fb74-trusted-ca\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.117059 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2b7b095-f2c4-442e-8df5-0343dca4fb74-registry-tls\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.117091 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2b7b095-f2c4-442e-8df5-0343dca4fb74-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.119432 4828 generic.go:334] "Generic (PLEG): container finished" podID="6622e706-e3b7-4c7c-923c-6b0ae67b16bb" containerID="37b4d8e62b5fb2c1b122d3caa56682705429ed0ab53d41440cdd8cd47a57d7a8" exitCode=0 Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.119869 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" event={"ID":"6622e706-e3b7-4c7c-923c-6b0ae67b16bb","Type":"ContainerDied","Data":"37b4d8e62b5fb2c1b122d3caa56682705429ed0ab53d41440cdd8cd47a57d7a8"} Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.125743 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tlrr" event={"ID":"1e56fecb-3765-4d29-9c94-02257c7e655b","Type":"ContainerStarted","Data":"6d4e30bbabc84f7232212b45aecbe216b028f42ae047437d085672b21c98c7a9"} Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.127941 4828 generic.go:334] "Generic (PLEG): container finished" podID="8a85d300-213b-4bda-aff7-73bc53e7e246" containerID="2448f6663690c2fa268ccf951e6c71574787c7237042f811e61812754044e35b" exitCode=0 Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.128020 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztsk4" event={"ID":"8a85d300-213b-4bda-aff7-73bc53e7e246","Type":"ContainerDied","Data":"2448f6663690c2fa268ccf951e6c71574787c7237042f811e61812754044e35b"} Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.141120 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.147999 4828 generic.go:334] "Generic (PLEG): container finished" podID="9ff75179-ff05-4afd-8037-7c3a38535ed0" 
containerID="f58d271a76c7343c4732044372bb3988108eae100dae1bdc746e4db74151c645" exitCode=0 Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.148604 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" event={"ID":"9ff75179-ff05-4afd-8037-7c3a38535ed0","Type":"ContainerDied","Data":"f58d271a76c7343c4732044372bb3988108eae100dae1bdc746e4db74151c645"} Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.219481 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2b7b095-f2c4-442e-8df5-0343dca4fb74-trusted-ca\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.222589 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2b7b095-f2c4-442e-8df5-0343dca4fb74-trusted-ca\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.224345 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2b7b095-f2c4-442e-8df5-0343dca4fb74-registry-tls\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.224381 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2b7b095-f2c4-442e-8df5-0343dca4fb74-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.224493 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7psj8\" (UniqueName: \"kubernetes.io/projected/a2b7b095-f2c4-442e-8df5-0343dca4fb74-kube-api-access-7psj8\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.224561 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2b7b095-f2c4-442e-8df5-0343dca4fb74-registry-certificates\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.224587 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2b7b095-f2c4-442e-8df5-0343dca4fb74-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.224604 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a2b7b095-f2c4-442e-8df5-0343dca4fb74-bound-sa-token\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.227719 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2b7b095-f2c4-442e-8df5-0343dca4fb74-registry-certificates\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.228187 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2b7b095-f2c4-442e-8df5-0343dca4fb74-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.233713 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2b7b095-f2c4-442e-8df5-0343dca4fb74-registry-tls\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.239575 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2b7b095-f2c4-442e-8df5-0343dca4fb74-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.250028 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7psj8\" (UniqueName: \"kubernetes.io/projected/a2b7b095-f2c4-442e-8df5-0343dca4fb74-kube-api-access-7psj8\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.252228 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2b7b095-f2c4-442e-8df5-0343dca4fb74-bound-sa-token\") pod \"image-registry-66df7c8f76-6wz5h\" (UID: \"a2b7b095-f2c4-442e-8df5-0343dca4fb74\") " pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.285391 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:13 crc kubenswrapper[4828]: I1205 19:10:13.553899 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6wz5h"] Dec 05 19:10:13 crc kubenswrapper[4828]: W1205 19:10:13.571237 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2b7b095_f2c4_442e_8df5_0343dca4fb74.slice/crio-46f1ae57acc28936d6d447d56c0357cd5f6823c46d96474bcd1a3f51af6b3ba3 WatchSource:0}: Error finding container 46f1ae57acc28936d6d447d56c0357cd5f6823c46d96474bcd1a3f51af6b3ba3: Status 404 returned error can't find the container with id 46f1ae57acc28936d6d447d56c0357cd5f6823c46d96474bcd1a3f51af6b3ba3 Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.157411 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" event={"ID":"6622e706-e3b7-4c7c-923c-6b0ae67b16bb","Type":"ContainerDied","Data":"b3c87809170daac68f2c94a067a0b5eae14ff5c9531ac6e70cab801991b9097c"} Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.158381 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3c87809170daac68f2c94a067a0b5eae14ff5c9531ac6e70cab801991b9097c" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.159459 4828 generic.go:334] "Generic (PLEG): container finished" podID="1e56fecb-3765-4d29-9c94-02257c7e655b" containerID="6d4e30bbabc84f7232212b45aecbe216b028f42ae047437d085672b21c98c7a9" exitCode=0 Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.159544 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tlrr" event={"ID":"1e56fecb-3765-4d29-9c94-02257c7e655b","Type":"ContainerDied","Data":"6d4e30bbabc84f7232212b45aecbe216b028f42ae047437d085672b21c98c7a9"} Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.162770 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" event={"ID":"9ff75179-ff05-4afd-8037-7c3a38535ed0","Type":"ContainerDied","Data":"34dd4a232f6d8ec1a965cbc580a3e0e8b85df246c3570126b3ea77f9aee0ee1f"} Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.162904 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34dd4a232f6d8ec1a965cbc580a3e0e8b85df246c3570126b3ea77f9aee0ee1f" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.165340 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" event={"ID":"a2b7b095-f2c4-442e-8df5-0343dca4fb74","Type":"ContainerStarted","Data":"46f1ae57acc28936d6d447d56c0357cd5f6823c46d96474bcd1a3f51af6b3ba3"} Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.476066 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.530739 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc"] Dec 05 19:10:14 crc kubenswrapper[4828]: E1205 19:10:14.530957 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6622e706-e3b7-4c7c-923c-6b0ae67b16bb" containerName="controller-manager" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.530973 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6622e706-e3b7-4c7c-923c-6b0ae67b16bb" containerName="controller-manager" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.531081 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6622e706-e3b7-4c7c-923c-6b0ae67b16bb" containerName="controller-manager" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.531444 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.539174 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.539943 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc"] Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.644113 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-proxy-ca-bundles\") pod \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.644538 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gntwk\" (UniqueName: \"kubernetes.io/projected/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-kube-api-access-gntwk\") pod \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.644615 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff75179-ff05-4afd-8037-7c3a38535ed0-config\") pod \"9ff75179-ff05-4afd-8037-7c3a38535ed0\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.644673 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-client-ca\") pod \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.644700 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff75179-ff05-4afd-8037-7c3a38535ed0-serving-cert\") pod \"9ff75179-ff05-4afd-8037-7c3a38535ed0\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.644729 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-serving-cert\") pod 
\"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.644750 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-config\") pod \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\" (UID: \"6622e706-e3b7-4c7c-923c-6b0ae67b16bb\") " Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.644780 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff75179-ff05-4afd-8037-7c3a38535ed0-client-ca\") pod \"9ff75179-ff05-4afd-8037-7c3a38535ed0\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.644805 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psjn5\" (UniqueName: \"kubernetes.io/projected/9ff75179-ff05-4afd-8037-7c3a38535ed0-kube-api-access-psjn5\") pod \"9ff75179-ff05-4afd-8037-7c3a38535ed0\" (UID: \"9ff75179-ff05-4afd-8037-7c3a38535ed0\") " Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.645000 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/709f6913-f56e-4ac0-a261-3cab0932ac4e-client-ca\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.645051 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/709f6913-f56e-4ac0-a261-3cab0932ac4e-serving-cert\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.645081 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpsrb\" (UniqueName: \"kubernetes.io/projected/709f6913-f56e-4ac0-a261-3cab0932ac4e-kube-api-access-tpsrb\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.645161 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/709f6913-f56e-4ac0-a261-3cab0932ac4e-proxy-ca-bundles\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.645199 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709f6913-f56e-4ac0-a261-3cab0932ac4e-config\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.645217 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-proxy-ca-bundles" 
(OuterVolumeSpecName: "proxy-ca-bundles") pod "6622e706-e3b7-4c7c-923c-6b0ae67b16bb" (UID: "6622e706-e3b7-4c7c-923c-6b0ae67b16bb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.645575 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff75179-ff05-4afd-8037-7c3a38535ed0-client-ca" (OuterVolumeSpecName: "client-ca") pod "9ff75179-ff05-4afd-8037-7c3a38535ed0" (UID: "9ff75179-ff05-4afd-8037-7c3a38535ed0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.645995 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-config" (OuterVolumeSpecName: "config") pod "6622e706-e3b7-4c7c-923c-6b0ae67b16bb" (UID: "6622e706-e3b7-4c7c-923c-6b0ae67b16bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.646173 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-client-ca" (OuterVolumeSpecName: "client-ca") pod "6622e706-e3b7-4c7c-923c-6b0ae67b16bb" (UID: "6622e706-e3b7-4c7c-923c-6b0ae67b16bb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.651016 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff75179-ff05-4afd-8037-7c3a38535ed0-config" (OuterVolumeSpecName: "config") pod "9ff75179-ff05-4afd-8037-7c3a38535ed0" (UID: "9ff75179-ff05-4afd-8037-7c3a38535ed0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.651070 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-kube-api-access-gntwk" (OuterVolumeSpecName: "kube-api-access-gntwk") pod "6622e706-e3b7-4c7c-923c-6b0ae67b16bb" (UID: "6622e706-e3b7-4c7c-923c-6b0ae67b16bb"). InnerVolumeSpecName "kube-api-access-gntwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.651161 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6622e706-e3b7-4c7c-923c-6b0ae67b16bb" (UID: "6622e706-e3b7-4c7c-923c-6b0ae67b16bb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.652174 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff75179-ff05-4afd-8037-7c3a38535ed0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9ff75179-ff05-4afd-8037-7c3a38535ed0" (UID: "9ff75179-ff05-4afd-8037-7c3a38535ed0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.652414 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff75179-ff05-4afd-8037-7c3a38535ed0-kube-api-access-psjn5" (OuterVolumeSpecName: "kube-api-access-psjn5") pod "9ff75179-ff05-4afd-8037-7c3a38535ed0" (UID: "9ff75179-ff05-4afd-8037-7c3a38535ed0"). InnerVolumeSpecName "kube-api-access-psjn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.746791 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709f6913-f56e-4ac0-a261-3cab0932ac4e-config\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.746857 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/709f6913-f56e-4ac0-a261-3cab0932ac4e-client-ca\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.746888 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/709f6913-f56e-4ac0-a261-3cab0932ac4e-serving-cert\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.746907 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpsrb\" (UniqueName: \"kubernetes.io/projected/709f6913-f56e-4ac0-a261-3cab0932ac4e-kube-api-access-tpsrb\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.746955 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/709f6913-f56e-4ac0-a261-3cab0932ac4e-proxy-ca-bundles\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.746989 4828 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.746999 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gntwk\" (UniqueName: \"kubernetes.io/projected/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-kube-api-access-gntwk\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.747008 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff75179-ff05-4afd-8037-7c3a38535ed0-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.747018 4828 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.747026 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff75179-ff05-4afd-8037-7c3a38535ed0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.747033 4828 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.747042 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6622e706-e3b7-4c7c-923c-6b0ae67b16bb-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.747052 4828 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ff75179-ff05-4afd-8037-7c3a38535ed0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.747060 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psjn5\" (UniqueName: \"kubernetes.io/projected/9ff75179-ff05-4afd-8037-7c3a38535ed0-kube-api-access-psjn5\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.748428 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/709f6913-f56e-4ac0-a261-3cab0932ac4e-proxy-ca-bundles\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.749527 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/709f6913-f56e-4ac0-a261-3cab0932ac4e-client-ca\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.750284 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709f6913-f56e-4ac0-a261-3cab0932ac4e-config\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.754764 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/709f6913-f56e-4ac0-a261-3cab0932ac4e-serving-cert\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.767658 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpsrb\" (UniqueName: \"kubernetes.io/projected/709f6913-f56e-4ac0-a261-3cab0932ac4e-kube-api-access-tpsrb\") pod \"controller-manager-c5dc4c8b9-ksjsc\" (UID: \"709f6913-f56e-4ac0-a261-3cab0932ac4e\") " pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:14 crc kubenswrapper[4828]: I1205 19:10:14.846187 4828 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.086637 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc"] Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.188920 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztsk4" event={"ID":"8a85d300-213b-4bda-aff7-73bc53e7e246","Type":"ContainerStarted","Data":"8da35f1aba34a446f4be2c5995559e21979b24940f9e9330274d2e5d24f96a62"} Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.190845 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" event={"ID":"709f6913-f56e-4ac0-a261-3cab0932ac4e","Type":"ContainerStarted","Data":"b4a6c0c39653cc860cf33ec0239889c799b827da151c780bfaced7eacd5ab559"} Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.192574 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" event={"ID":"a2b7b095-f2c4-442e-8df5-0343dca4fb74","Type":"ContainerStarted","Data":"964a1ebf33228b4c19f88a89468cef794c9feb2a1d11941d0e66a0d3ef8c523f"} Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.192893 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.204397 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-tmwml" Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.204892 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tlrr" event={"ID":"1e56fecb-3765-4d29-9c94-02257c7e655b","Type":"ContainerStarted","Data":"abf9c812d68b417a5c233d1218c1e088fa4e302576b4d82fca4f091ef8142f79"} Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.205756 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g" Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.205987 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ztsk4" podStartSLOduration=3.717000825 podStartE2EDuration="6.205965035s" podCreationTimestamp="2025-12-05 19:10:09 +0000 UTC" firstStartedPulling="2025-12-05 19:10:11.07975777 +0000 UTC m=+388.974980076" lastFinishedPulling="2025-12-05 19:10:13.56872198 +0000 UTC m=+391.463944286" observedRunningTime="2025-12-05 19:10:15.203269694 +0000 UTC m=+393.098492010" watchObservedRunningTime="2025-12-05 19:10:15.205965035 +0000 UTC m=+393.101187341" Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.239441 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" podStartSLOduration=3.239423162 podStartE2EDuration="3.239423162s" podCreationTimestamp="2025-12-05 19:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:10:15.221737948 +0000 UTC m=+393.116960254" watchObservedRunningTime="2025-12-05 19:10:15.239423162 +0000 UTC m=+393.134645468" Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.258084 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2tlrr" podStartSLOduration=2.748231854 podStartE2EDuration="5.25806245s" podCreationTimestamp="2025-12-05 19:10:10 +0000 UTC" firstStartedPulling="2025-12-05 19:10:12.093923641 +0000 UTC m=+389.989145947" lastFinishedPulling="2025-12-05 19:10:14.603754237 +0000 UTC m=+392.498976543" observedRunningTime="2025-12-05 19:10:15.241084865 +0000 UTC m=+393.136307171" watchObservedRunningTime="2025-12-05 19:10:15.25806245 +0000 UTC m=+393.153284756" Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.259233 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-tmwml"] Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.263085 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-tmwml"] Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.271121 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g"] Dec 05 19:10:15 crc kubenswrapper[4828]: I1205 19:10:15.275837 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-vhw9g"] Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.212154 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" event={"ID":"709f6913-f56e-4ac0-a261-3cab0932ac4e","Type":"ContainerStarted","Data":"2b9ef354b0b7fdb2df5a062349574784052e6a48c3fac63bd6252db7189aaac7"} Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.212896 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.217470 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.253037 4828 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" podStartSLOduration=4.253017767 podStartE2EDuration="4.253017767s" podCreationTimestamp="2025-12-05 19:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:10:16.234968424 +0000 UTC m=+394.130190730" watchObservedRunningTime="2025-12-05 19:10:16.253017767 +0000 UTC m=+394.148240073" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.453507 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6622e706-e3b7-4c7c-923c-6b0ae67b16bb" path="/var/lib/kubelet/pods/6622e706-e3b7-4c7c-923c-6b0ae67b16bb/volumes" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.454088 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ff75179-ff05-4afd-8037-7c3a38535ed0" path="/var/lib/kubelet/pods/9ff75179-ff05-4afd-8037-7c3a38535ed0/volumes" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.599849 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr"] Dec 05 19:10:16 crc kubenswrapper[4828]: E1205 19:10:16.600070 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff75179-ff05-4afd-8037-7c3a38535ed0" containerName="route-controller-manager" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.600084 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff75179-ff05-4afd-8037-7c3a38535ed0" containerName="route-controller-manager" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.600206 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff75179-ff05-4afd-8037-7c3a38535ed0" containerName="route-controller-manager" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.600609 4828 util.go:30] "No sandbox for pod can be found. 
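
The RemoveStaleState, "Deleted CPUSet assignment", and memory_manager lines show the resource managers reacting to pod 9ff75179-... being gone: each keeps a checkpoint of per-container assignments and sweeps out entries whose pod is no longer active before admitting the replacement pod. A sketch of that sweep with hypothetical types (the real managers live under the kubelet's container-manager packages):

    package main

    import "fmt"

    type key struct{ PodUID, Container string }

    // removeStaleState drops assignments for pods that are no longer active,
    // mirroring the "RemoveStaleState: removing container" messages above.
    func removeStaleState(assignments map[key]string, active map[string]bool) {
        for k := range assignments {
            if !active[k.PodUID] {
                fmt.Printf("removing container %q of pod %s\n", k.Container, k.PodUID)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {"9ff75179-ff05-4afd-8037-7c3a38535ed0", "route-controller-manager"}: "cpuset 0-3",
        }
        removeStaleState(assignments, map[string]bool{}) // old pod gone, entry dropped
    }
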
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.602527 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.603055 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.603077 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.603055 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.603265 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.604242 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.618267 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr"] Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.671858 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b98s\" (UniqueName: \"kubernetes.io/projected/c7b0807e-de8b-453b-bcfd-bc5b12aa6e16-kube-api-access-6b98s\") pod \"route-controller-manager-c9fb57676-m62sr\" (UID: \"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16\") " pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.671983 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b0807e-de8b-453b-bcfd-bc5b12aa6e16-config\") pod \"route-controller-manager-c9fb57676-m62sr\" (UID: \"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16\") " pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.672066 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7b0807e-de8b-453b-bcfd-bc5b12aa6e16-client-ca\") pod \"route-controller-manager-c9fb57676-m62sr\" (UID: \"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16\") " pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.672132 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7b0807e-de8b-453b-bcfd-bc5b12aa6e16-serving-cert\") pod \"route-controller-manager-c9fb57676-m62sr\" (UID: \"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16\") " pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.787304 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7b0807e-de8b-453b-bcfd-bc5b12aa6e16-serving-cert\") pod 
\"route-controller-manager-c9fb57676-m62sr\" (UID: \"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16\") " pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.787516 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b98s\" (UniqueName: \"kubernetes.io/projected/c7b0807e-de8b-453b-bcfd-bc5b12aa6e16-kube-api-access-6b98s\") pod \"route-controller-manager-c9fb57676-m62sr\" (UID: \"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16\") " pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.787553 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b0807e-de8b-453b-bcfd-bc5b12aa6e16-config\") pod \"route-controller-manager-c9fb57676-m62sr\" (UID: \"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16\") " pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.787594 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7b0807e-de8b-453b-bcfd-bc5b12aa6e16-client-ca\") pod \"route-controller-manager-c9fb57676-m62sr\" (UID: \"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16\") " pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.788754 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7b0807e-de8b-453b-bcfd-bc5b12aa6e16-client-ca\") pod \"route-controller-manager-c9fb57676-m62sr\" (UID: \"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16\") " pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.789381 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b0807e-de8b-453b-bcfd-bc5b12aa6e16-config\") pod \"route-controller-manager-c9fb57676-m62sr\" (UID: \"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16\") " pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.796390 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7b0807e-de8b-453b-bcfd-bc5b12aa6e16-serving-cert\") pod \"route-controller-manager-c9fb57676-m62sr\" (UID: \"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16\") " pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.806236 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b98s\" (UniqueName: \"kubernetes.io/projected/c7b0807e-de8b-453b-bcfd-bc5b12aa6e16-kube-api-access-6b98s\") pod \"route-controller-manager-c9fb57676-m62sr\" (UID: \"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16\") " pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:16 crc kubenswrapper[4828]: I1205 19:10:16.916676 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:17 crc kubenswrapper[4828]: I1205 19:10:17.311414 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr"] Dec 05 19:10:17 crc kubenswrapper[4828]: I1205 19:10:17.764078 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:17 crc kubenswrapper[4828]: I1205 19:10:17.764454 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:17 crc kubenswrapper[4828]: I1205 19:10:17.805533 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:18 crc kubenswrapper[4828]: I1205 19:10:18.223956 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" event={"ID":"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16","Type":"ContainerStarted","Data":"a68d59922208b47faf531d5e60182ff68a9b21af0159d69ab587569ada1c4e70"} Dec 05 19:10:18 crc kubenswrapper[4828]: I1205 19:10:18.224004 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" event={"ID":"c7b0807e-de8b-453b-bcfd-bc5b12aa6e16","Type":"ContainerStarted","Data":"e0f977c3d33499bd886be79c6ee7b3e8dd327a51835c7588b122d6f2f559820c"} Dec 05 19:10:18 crc kubenswrapper[4828]: I1205 19:10:18.243409 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" podStartSLOduration=6.243393524 podStartE2EDuration="6.243393524s" podCreationTimestamp="2025-12-05 19:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:10:18.239746799 +0000 UTC m=+396.134969115" watchObservedRunningTime="2025-12-05 19:10:18.243393524 +0000 UTC m=+396.138615830" Dec 05 19:10:18 crc kubenswrapper[4828]: I1205 19:10:18.262848 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bch5n" Dec 05 19:10:18 crc kubenswrapper[4828]: I1205 19:10:18.346026 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:18 crc kubenswrapper[4828]: I1205 19:10:18.346065 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:18 crc kubenswrapper[4828]: I1205 19:10:18.396161 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:19 crc kubenswrapper[4828]: I1205 19:10:19.229605 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:19 crc kubenswrapper[4828]: I1205 19:10:19.236841 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c9fb57676-m62sr" Dec 05 19:10:19 crc kubenswrapper[4828]: I1205 19:10:19.282142 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-pmljs" Dec 05 19:10:20 crc kubenswrapper[4828]: I1205 19:10:20.155964 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:20 crc kubenswrapper[4828]: I1205 19:10:20.156032 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:20 crc kubenswrapper[4828]: I1205 19:10:20.199672 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:20 crc kubenswrapper[4828]: I1205 19:10:20.276703 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ztsk4" Dec 05 19:10:20 crc kubenswrapper[4828]: I1205 19:10:20.744672 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:20 crc kubenswrapper[4828]: I1205 19:10:20.745158 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:20 crc kubenswrapper[4828]: I1205 19:10:20.785866 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:21 crc kubenswrapper[4828]: I1205 19:10:21.282924 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2tlrr" Dec 05 19:10:33 crc kubenswrapper[4828]: I1205 19:10:33.293270 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-6wz5h" Dec 05 19:10:33 crc kubenswrapper[4828]: I1205 19:10:33.347434 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wk88t"] Dec 05 19:10:35 crc kubenswrapper[4828]: I1205 19:10:35.259907 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:10:35 crc kubenswrapper[4828]: I1205 19:10:35.259995 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:10:35 crc kubenswrapper[4828]: I1205 19:10:35.260085 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:10:35 crc kubenswrapper[4828]: I1205 19:10:35.261175 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8de933ba36cba5665f56451f60fe62908c403f5937616de58a6e4ebbe2c5830"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 19:10:35 crc kubenswrapper[4828]: I1205 19:10:35.261331 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" 
podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://d8de933ba36cba5665f56451f60fe62908c403f5937616de58a6e4ebbe2c5830" gracePeriod=600 Dec 05 19:10:36 crc kubenswrapper[4828]: I1205 19:10:36.337787 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="d8de933ba36cba5665f56451f60fe62908c403f5937616de58a6e4ebbe2c5830" exitCode=0 Dec 05 19:10:36 crc kubenswrapper[4828]: I1205 19:10:36.337863 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"d8de933ba36cba5665f56451f60fe62908c403f5937616de58a6e4ebbe2c5830"} Dec 05 19:10:36 crc kubenswrapper[4828]: I1205 19:10:36.338798 4828 scope.go:117] "RemoveContainer" containerID="01295b144fe480a6357600dc1bc0402a855e69c8338c38ef0d83f8f95d65359e" Dec 05 19:10:37 crc kubenswrapper[4828]: I1205 19:10:37.346303 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"8bb75c4c0ebf5117e84bd2908811ecc4d1acf37d442ad533ca9f795d77cc15ba"} Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.403733 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" podUID="17e0a9b8-d746-4a17-a424-122b5c30ce75" containerName="registry" containerID="cri-o://487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42" gracePeriod=30 Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.792799 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.904739 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"17e0a9b8-d746-4a17-a424-122b5c30ce75\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.904808 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/17e0a9b8-d746-4a17-a424-122b5c30ce75-installation-pull-secrets\") pod \"17e0a9b8-d746-4a17-a424-122b5c30ce75\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.904856 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-bound-sa-token\") pod \"17e0a9b8-d746-4a17-a424-122b5c30ce75\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.904879 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/17e0a9b8-d746-4a17-a424-122b5c30ce75-ca-trust-extracted\") pod \"17e0a9b8-d746-4a17-a424-122b5c30ce75\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.904914 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/17e0a9b8-d746-4a17-a424-122b5c30ce75-registry-certificates\") pod \"17e0a9b8-d746-4a17-a424-122b5c30ce75\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.904939 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sthq8\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-kube-api-access-sthq8\") pod \"17e0a9b8-d746-4a17-a424-122b5c30ce75\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.904991 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-registry-tls\") pod \"17e0a9b8-d746-4a17-a424-122b5c30ce75\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.905131 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17e0a9b8-d746-4a17-a424-122b5c30ce75-trusted-ca\") pod \"17e0a9b8-d746-4a17-a424-122b5c30ce75\" (UID: \"17e0a9b8-d746-4a17-a424-122b5c30ce75\") " Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.906449 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17e0a9b8-d746-4a17-a424-122b5c30ce75-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "17e0a9b8-d746-4a17-a424-122b5c30ce75" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.906934 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17e0a9b8-d746-4a17-a424-122b5c30ce75-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "17e0a9b8-d746-4a17-a424-122b5c30ce75" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.912811 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e0a9b8-d746-4a17-a424-122b5c30ce75-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "17e0a9b8-d746-4a17-a424-122b5c30ce75" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.917184 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "17e0a9b8-d746-4a17-a424-122b5c30ce75" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.917902 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "17e0a9b8-d746-4a17-a424-122b5c30ce75" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.919284 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-kube-api-access-sthq8" (OuterVolumeSpecName: "kube-api-access-sthq8") pod "17e0a9b8-d746-4a17-a424-122b5c30ce75" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75"). InnerVolumeSpecName "kube-api-access-sthq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.923553 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "17e0a9b8-d746-4a17-a424-122b5c30ce75" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 19:10:58 crc kubenswrapper[4828]: I1205 19:10:58.926874 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e0a9b8-d746-4a17-a424-122b5c30ce75-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "17e0a9b8-d746-4a17-a424-122b5c30ce75" (UID: "17e0a9b8-d746-4a17-a424-122b5c30ce75"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.007013 4828 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.007066 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17e0a9b8-d746-4a17-a424-122b5c30ce75-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.007088 4828 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/17e0a9b8-d746-4a17-a424-122b5c30ce75-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.007108 4828 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.007125 4828 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/17e0a9b8-d746-4a17-a424-122b5c30ce75-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.007142 4828 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/17e0a9b8-d746-4a17-a424-122b5c30ce75-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.007161 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sthq8\" (UniqueName: \"kubernetes.io/projected/17e0a9b8-d746-4a17-a424-122b5c30ce75-kube-api-access-sthq8\") on node \"crc\" DevicePath \"\"" Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.472349 4828 generic.go:334] "Generic (PLEG): container finished" podID="17e0a9b8-d746-4a17-a424-122b5c30ce75" containerID="487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42" exitCode=0 Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.472431 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" event={"ID":"17e0a9b8-d746-4a17-a424-122b5c30ce75","Type":"ContainerDied","Data":"487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42"} Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.472464 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" event={"ID":"17e0a9b8-d746-4a17-a424-122b5c30ce75","Type":"ContainerDied","Data":"0322e428fd130c5c57be85540610fc781b7637ea49b2765877a0086f7847615b"} Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.472512 4828 scope.go:117] "RemoveContainer" containerID="487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42" Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.472543 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wk88t" Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.497856 4828 scope.go:117] "RemoveContainer" containerID="487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42" Dec 05 19:10:59 crc kubenswrapper[4828]: E1205 19:10:59.498508 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42\": container with ID starting with 487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42 not found: ID does not exist" containerID="487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42" Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.498540 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42"} err="failed to get container status \"487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42\": rpc error: code = NotFound desc = could not find container \"487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42\": container with ID starting with 487cbd7007bf7752bc5d7e03a232dd7f3c3742965eefca67c3b85ef2a97c9d42 not found: ID does not exist" Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.507881 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wk88t"] Dec 05 19:10:59 crc kubenswrapper[4828]: I1205 19:10:59.513673 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wk88t"] Dec 05 19:11:00 crc kubenswrapper[4828]: I1205 19:11:00.458120 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e0a9b8-d746-4a17-a424-122b5c30ce75" path="/var/lib/kubelet/pods/17e0a9b8-d746-4a17-a424-122b5c30ce75/volumes" Dec 05 19:13:05 crc kubenswrapper[4828]: I1205 19:13:05.260248 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:13:05 crc kubenswrapper[4828]: I1205 19:13:05.261061 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:13:35 crc kubenswrapper[4828]: I1205 19:13:35.259634 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:13:35 crc kubenswrapper[4828]: I1205 19:13:35.260355 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:14:05 crc kubenswrapper[4828]: I1205 19:14:05.260179 4828 patch_prober.go:28] interesting 
pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:14:05 crc kubenswrapper[4828]: I1205 19:14:05.260676 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:14:05 crc kubenswrapper[4828]: I1205 19:14:05.260744 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:14:05 crc kubenswrapper[4828]: I1205 19:14:05.261571 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bb75c4c0ebf5117e84bd2908811ecc4d1acf37d442ad533ca9f795d77cc15ba"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 19:14:05 crc kubenswrapper[4828]: I1205 19:14:05.261644 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://8bb75c4c0ebf5117e84bd2908811ecc4d1acf37d442ad533ca9f795d77cc15ba" gracePeriod=600 Dec 05 19:14:06 crc kubenswrapper[4828]: I1205 19:14:06.232395 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="8bb75c4c0ebf5117e84bd2908811ecc4d1acf37d442ad533ca9f795d77cc15ba" exitCode=0 Dec 05 19:14:06 crc kubenswrapper[4828]: I1205 19:14:06.232496 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"8bb75c4c0ebf5117e84bd2908811ecc4d1acf37d442ad533ca9f795d77cc15ba"} Dec 05 19:14:06 crc kubenswrapper[4828]: I1205 19:14:06.232760 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"6e314ace055f344d073229b86b5faa8f9693ed01502a72c37b8b7db2eef860a3"} Dec 05 19:14:06 crc kubenswrapper[4828]: I1205 19:14:06.232787 4828 scope.go:117] "RemoveContainer" containerID="d8de933ba36cba5665f56451f60fe62908c403f5937616de58a6e4ebbe2c5830" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.187529 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj"] Dec 05 19:15:00 crc kubenswrapper[4828]: E1205 19:15:00.188331 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e0a9b8-d746-4a17-a424-122b5c30ce75" containerName="registry" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.188346 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e0a9b8-d746-4a17-a424-122b5c30ce75" containerName="registry" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.188469 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e0a9b8-d746-4a17-a424-122b5c30ce75" 
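
This second machine-config-daemon episode shows the liveness machinery end to end: GET http://127.0.0.1:8798/health fails at 19:13:05, 19:13:35, and 19:14:05, thirty seconds apart, the kill with gracePeriod=600 is issued on the third failure, and a replacement container starts. That cadence is consistent with a probe shaped roughly like the following, inferred from the log rather than copied from the actual DaemonSet:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "127.0.0.1",
                    Path: "/health",
                    Port: intstr.FromInt(8798),
                },
            },
            PeriodSeconds:    30, // failures logged 30s apart
            FailureThreshold: 3,  // kill issued on the third consecutive failure
        }
        fmt.Printf("%+v\n", probe)
    }

The 600-second grace period comes from the pod's terminationGracePeriodSeconds, not from the probe itself.
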
containerName="registry" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.188969 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.191981 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.193880 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.203379 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj"] Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.280735 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grjnr\" (UniqueName: \"kubernetes.io/projected/c5542062-eec5-413d-a23a-8ac6b1338b0c-kube-api-access-grjnr\") pod \"collect-profiles-29416035-wnqmj\" (UID: \"c5542062-eec5-413d-a23a-8ac6b1338b0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.280920 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5542062-eec5-413d-a23a-8ac6b1338b0c-config-volume\") pod \"collect-profiles-29416035-wnqmj\" (UID: \"c5542062-eec5-413d-a23a-8ac6b1338b0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.280981 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5542062-eec5-413d-a23a-8ac6b1338b0c-secret-volume\") pod \"collect-profiles-29416035-wnqmj\" (UID: \"c5542062-eec5-413d-a23a-8ac6b1338b0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.381876 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grjnr\" (UniqueName: \"kubernetes.io/projected/c5542062-eec5-413d-a23a-8ac6b1338b0c-kube-api-access-grjnr\") pod \"collect-profiles-29416035-wnqmj\" (UID: \"c5542062-eec5-413d-a23a-8ac6b1338b0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.381989 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5542062-eec5-413d-a23a-8ac6b1338b0c-config-volume\") pod \"collect-profiles-29416035-wnqmj\" (UID: \"c5542062-eec5-413d-a23a-8ac6b1338b0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.382055 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5542062-eec5-413d-a23a-8ac6b1338b0c-secret-volume\") pod \"collect-profiles-29416035-wnqmj\" (UID: \"c5542062-eec5-413d-a23a-8ac6b1338b0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.383281 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5542062-eec5-413d-a23a-8ac6b1338b0c-config-volume\") pod \"collect-profiles-29416035-wnqmj\" (UID: \"c5542062-eec5-413d-a23a-8ac6b1338b0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.387519 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5542062-eec5-413d-a23a-8ac6b1338b0c-secret-volume\") pod \"collect-profiles-29416035-wnqmj\" (UID: \"c5542062-eec5-413d-a23a-8ac6b1338b0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.398523 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grjnr\" (UniqueName: \"kubernetes.io/projected/c5542062-eec5-413d-a23a-8ac6b1338b0c-kube-api-access-grjnr\") pod \"collect-profiles-29416035-wnqmj\" (UID: \"c5542062-eec5-413d-a23a-8ac6b1338b0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.518337 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:00 crc kubenswrapper[4828]: I1205 19:15:00.696748 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj"] Dec 05 19:15:01 crc kubenswrapper[4828]: I1205 19:15:01.610159 4828 generic.go:334] "Generic (PLEG): container finished" podID="c5542062-eec5-413d-a23a-8ac6b1338b0c" containerID="6134daa365957c5e50da988d77f84d2f4b6f05839521653242e508d3d5882c48" exitCode=0 Dec 05 19:15:01 crc kubenswrapper[4828]: I1205 19:15:01.610201 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" event={"ID":"c5542062-eec5-413d-a23a-8ac6b1338b0c","Type":"ContainerDied","Data":"6134daa365957c5e50da988d77f84d2f4b6f05839521653242e508d3d5882c48"} Dec 05 19:15:01 crc kubenswrapper[4828]: I1205 19:15:01.610228 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" event={"ID":"c5542062-eec5-413d-a23a-8ac6b1338b0c","Type":"ContainerStarted","Data":"26f9a0a54059cc720807fb26d9583b5a99eec20eca85c2454011cfffd2925f51"} Dec 05 19:15:02 crc kubenswrapper[4828]: I1205 19:15:02.847577 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:02 crc kubenswrapper[4828]: I1205 19:15:02.910784 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grjnr\" (UniqueName: \"kubernetes.io/projected/c5542062-eec5-413d-a23a-8ac6b1338b0c-kube-api-access-grjnr\") pod \"c5542062-eec5-413d-a23a-8ac6b1338b0c\" (UID: \"c5542062-eec5-413d-a23a-8ac6b1338b0c\") " Dec 05 19:15:02 crc kubenswrapper[4828]: I1205 19:15:02.910879 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5542062-eec5-413d-a23a-8ac6b1338b0c-secret-volume\") pod \"c5542062-eec5-413d-a23a-8ac6b1338b0c\" (UID: \"c5542062-eec5-413d-a23a-8ac6b1338b0c\") " Dec 05 19:15:02 crc kubenswrapper[4828]: I1205 19:15:02.910934 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5542062-eec5-413d-a23a-8ac6b1338b0c-config-volume\") pod \"c5542062-eec5-413d-a23a-8ac6b1338b0c\" (UID: \"c5542062-eec5-413d-a23a-8ac6b1338b0c\") " Dec 05 19:15:02 crc kubenswrapper[4828]: I1205 19:15:02.911705 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5542062-eec5-413d-a23a-8ac6b1338b0c-config-volume" (OuterVolumeSpecName: "config-volume") pod "c5542062-eec5-413d-a23a-8ac6b1338b0c" (UID: "c5542062-eec5-413d-a23a-8ac6b1338b0c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:15:02 crc kubenswrapper[4828]: I1205 19:15:02.916291 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5542062-eec5-413d-a23a-8ac6b1338b0c-kube-api-access-grjnr" (OuterVolumeSpecName: "kube-api-access-grjnr") pod "c5542062-eec5-413d-a23a-8ac6b1338b0c" (UID: "c5542062-eec5-413d-a23a-8ac6b1338b0c"). InnerVolumeSpecName "kube-api-access-grjnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:15:02 crc kubenswrapper[4828]: I1205 19:15:02.917948 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5542062-eec5-413d-a23a-8ac6b1338b0c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c5542062-eec5-413d-a23a-8ac6b1338b0c" (UID: "c5542062-eec5-413d-a23a-8ac6b1338b0c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:15:03 crc kubenswrapper[4828]: I1205 19:15:03.012602 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grjnr\" (UniqueName: \"kubernetes.io/projected/c5542062-eec5-413d-a23a-8ac6b1338b0c-kube-api-access-grjnr\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:03 crc kubenswrapper[4828]: I1205 19:15:03.012652 4828 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5542062-eec5-413d-a23a-8ac6b1338b0c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:03 crc kubenswrapper[4828]: I1205 19:15:03.012666 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5542062-eec5-413d-a23a-8ac6b1338b0c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:03 crc kubenswrapper[4828]: I1205 19:15:03.623955 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" event={"ID":"c5542062-eec5-413d-a23a-8ac6b1338b0c","Type":"ContainerDied","Data":"26f9a0a54059cc720807fb26d9583b5a99eec20eca85c2454011cfffd2925f51"} Dec 05 19:15:03 crc kubenswrapper[4828]: I1205 19:15:03.624005 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f9a0a54059cc720807fb26d9583b5a99eec20eca85c2454011cfffd2925f51" Dec 05 19:15:03 crc kubenswrapper[4828]: I1205 19:15:03.624029 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.731888 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-49xx5"] Dec 05 19:15:36 crc kubenswrapper[4828]: E1205 19:15:36.732727 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5542062-eec5-413d-a23a-8ac6b1338b0c" containerName="collect-profiles" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.732743 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5542062-eec5-413d-a23a-8ac6b1338b0c" containerName="collect-profiles" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.732882 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5542062-eec5-413d-a23a-8ac6b1338b0c" containerName="collect-profiles" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.733337 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-49xx5" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.735202 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.735250 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.736240 4828 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-j4b52" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.743862 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7srhj"] Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.744643 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7srhj" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.749714 4828 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zpvfh" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.758691 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7srhj"] Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.769813 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-n596v"] Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.770693 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-n596v" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.773567 4828 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-v7vtw" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.787313 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-n596v"] Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.798977 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-49xx5"] Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.871842 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w28jq\" (UniqueName: \"kubernetes.io/projected/c8cac37c-e093-48f2-b1da-eaf62bf95bfd-kube-api-access-w28jq\") pod \"cert-manager-cainjector-7f985d654d-49xx5\" (UID: \"c8cac37c-e093-48f2-b1da-eaf62bf95bfd\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-49xx5" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.871942 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckbwj\" (UniqueName: \"kubernetes.io/projected/3d6b347b-b532-43b5-b0d4-8c40b7962156-kube-api-access-ckbwj\") pod \"cert-manager-5b446d88c5-7srhj\" (UID: \"3d6b347b-b532-43b5-b0d4-8c40b7962156\") " pod="cert-manager/cert-manager-5b446d88c5-7srhj" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.871973 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxr84\" (UniqueName: \"kubernetes.io/projected/2fcb84c0-5e6a-45ee-9c06-f0a12a1ef15b-kube-api-access-mxr84\") pod \"cert-manager-webhook-5655c58dd6-n596v\" (UID: \"2fcb84c0-5e6a-45ee-9c06-f0a12a1ef15b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-n596v" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.973629 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w28jq\" (UniqueName: \"kubernetes.io/projected/c8cac37c-e093-48f2-b1da-eaf62bf95bfd-kube-api-access-w28jq\") pod \"cert-manager-cainjector-7f985d654d-49xx5\" (UID: \"c8cac37c-e093-48f2-b1da-eaf62bf95bfd\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-49xx5" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.973902 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckbwj\" (UniqueName: \"kubernetes.io/projected/3d6b347b-b532-43b5-b0d4-8c40b7962156-kube-api-access-ckbwj\") pod \"cert-manager-5b446d88c5-7srhj\" (UID: \"3d6b347b-b532-43b5-b0d4-8c40b7962156\") " pod="cert-manager/cert-manager-5b446d88c5-7srhj" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.973983 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxr84\" (UniqueName: \"kubernetes.io/projected/2fcb84c0-5e6a-45ee-9c06-f0a12a1ef15b-kube-api-access-mxr84\") pod \"cert-manager-webhook-5655c58dd6-n596v\" (UID: \"2fcb84c0-5e6a-45ee-9c06-f0a12a1ef15b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-n596v" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.991393 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxr84\" (UniqueName: \"kubernetes.io/projected/2fcb84c0-5e6a-45ee-9c06-f0a12a1ef15b-kube-api-access-mxr84\") pod \"cert-manager-webhook-5655c58dd6-n596v\" (UID: \"2fcb84c0-5e6a-45ee-9c06-f0a12a1ef15b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-n596v" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.991412 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckbwj\" (UniqueName: \"kubernetes.io/projected/3d6b347b-b532-43b5-b0d4-8c40b7962156-kube-api-access-ckbwj\") pod \"cert-manager-5b446d88c5-7srhj\" (UID: \"3d6b347b-b532-43b5-b0d4-8c40b7962156\") " pod="cert-manager/cert-manager-5b446d88c5-7srhj" Dec 05 19:15:36 crc kubenswrapper[4828]: I1205 19:15:36.991464 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w28jq\" (UniqueName: \"kubernetes.io/projected/c8cac37c-e093-48f2-b1da-eaf62bf95bfd-kube-api-access-w28jq\") pod \"cert-manager-cainjector-7f985d654d-49xx5\" (UID: \"c8cac37c-e093-48f2-b1da-eaf62bf95bfd\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-49xx5" Dec 05 19:15:37 crc kubenswrapper[4828]: I1205 19:15:37.054425 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-49xx5" Dec 05 19:15:37 crc kubenswrapper[4828]: I1205 19:15:37.062922 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7srhj" Dec 05 19:15:37 crc kubenswrapper[4828]: I1205 19:15:37.086238 4828 util.go:30] "No sandbox for pod can be found. 
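
The three cert-manager pods mount nothing except their kube-api-access-* volumes: the projected service-account volume injected at admission, combining a bound token, the cluster CA bundle, and the pod's namespace. Approximately, using upstream defaults (the 3607-second expiry is the apiserver's default, assumed here rather than read from the pod):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        expiry := int64(3607)
        vol := corev1.Volume{
            Name: "kube-api-access-w28jq",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Path:              "token",
                            ExpirationSeconds: &expiry,
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "namespace",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
                            }},
                        }},
                    },
                },
            },
        }
        fmt.Println(vol.Name, "with", len(vol.VolumeSource.Projected.Sources), "projected sources")
    }
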
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-n596v" Dec 05 19:15:37 crc kubenswrapper[4828]: I1205 19:15:37.293751 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-49xx5"] Dec 05 19:15:37 crc kubenswrapper[4828]: I1205 19:15:37.312517 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 19:15:37 crc kubenswrapper[4828]: I1205 19:15:37.329282 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7srhj"] Dec 05 19:15:37 crc kubenswrapper[4828]: W1205 19:15:37.334798 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d6b347b_b532_43b5_b0d4_8c40b7962156.slice/crio-6daee6d82eb10c0dc1ef89a60aa8b0cb7230ee616f0051e085e65f425561fc51 WatchSource:0}: Error finding container 6daee6d82eb10c0dc1ef89a60aa8b0cb7230ee616f0051e085e65f425561fc51: Status 404 returned error can't find the container with id 6daee6d82eb10c0dc1ef89a60aa8b0cb7230ee616f0051e085e65f425561fc51 Dec 05 19:15:37 crc kubenswrapper[4828]: I1205 19:15:37.360548 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-n596v"] Dec 05 19:15:37 crc kubenswrapper[4828]: I1205 19:15:37.836801 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-n596v" event={"ID":"2fcb84c0-5e6a-45ee-9c06-f0a12a1ef15b","Type":"ContainerStarted","Data":"17c9743cb8331fe6884eaf4bc6128c3512376a336e65d30d9f1e023665e4735b"} Dec 05 19:15:37 crc kubenswrapper[4828]: I1205 19:15:37.838136 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7srhj" event={"ID":"3d6b347b-b532-43b5-b0d4-8c40b7962156","Type":"ContainerStarted","Data":"6daee6d82eb10c0dc1ef89a60aa8b0cb7230ee616f0051e085e65f425561fc51"} Dec 05 19:15:37 crc kubenswrapper[4828]: I1205 19:15:37.839097 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-49xx5" event={"ID":"c8cac37c-e093-48f2-b1da-eaf62bf95bfd","Type":"ContainerStarted","Data":"7abbf0a5f70f63ee7fa0041236476ec5a388181aded2153686e411e0045c6dd3"} Dec 05 19:15:40 crc kubenswrapper[4828]: I1205 19:15:40.865798 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7srhj" event={"ID":"3d6b347b-b532-43b5-b0d4-8c40b7962156","Type":"ContainerStarted","Data":"56a9ed5730c2469f3e7fb7cf29f4ba67b55777799fb091fed0cec6a133e8dc97"} Dec 05 19:15:40 crc kubenswrapper[4828]: I1205 19:15:40.868172 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-49xx5" event={"ID":"c8cac37c-e093-48f2-b1da-eaf62bf95bfd","Type":"ContainerStarted","Data":"6b9a3725ab4ea9396dd268c5fbd5bfa915bfd60ce94e2e1f045d69ce97d0f5f9"} Dec 05 19:15:40 crc kubenswrapper[4828]: I1205 19:15:40.871023 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-n596v" event={"ID":"2fcb84c0-5e6a-45ee-9c06-f0a12a1ef15b","Type":"ContainerStarted","Data":"c2e2d2d75b0699ddbd0e2cbe7114c4c02889cd3d0cab9efa2e2aad657e930f31"} Dec 05 19:15:40 crc kubenswrapper[4828]: I1205 19:15:40.871238 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-n596v" Dec 05 19:15:40 crc kubenswrapper[4828]: I1205 19:15:40.908787 4828 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-7srhj" podStartSLOduration=1.8694882800000001 podStartE2EDuration="4.908767686s" podCreationTimestamp="2025-12-05 19:15:36 +0000 UTC" firstStartedPulling="2025-12-05 19:15:37.340674914 +0000 UTC m=+715.235897230" lastFinishedPulling="2025-12-05 19:15:40.37995433 +0000 UTC m=+718.275176636" observedRunningTime="2025-12-05 19:15:40.888008457 +0000 UTC m=+718.783230763" watchObservedRunningTime="2025-12-05 19:15:40.908767686 +0000 UTC m=+718.803989992" Dec 05 19:15:40 crc kubenswrapper[4828]: I1205 19:15:40.931352 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-49xx5" podStartSLOduration=1.809547905 podStartE2EDuration="4.931327004s" podCreationTimestamp="2025-12-05 19:15:36 +0000 UTC" firstStartedPulling="2025-12-05 19:15:37.312130005 +0000 UTC m=+715.207352311" lastFinishedPulling="2025-12-05 19:15:40.433909104 +0000 UTC m=+718.329131410" observedRunningTime="2025-12-05 19:15:40.904515132 +0000 UTC m=+718.799737448" watchObservedRunningTime="2025-12-05 19:15:40.931327004 +0000 UTC m=+718.826549340" Dec 05 19:15:40 crc kubenswrapper[4828]: I1205 19:15:40.932112 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-n596v" podStartSLOduration=1.916669131 podStartE2EDuration="4.932104735s" podCreationTimestamp="2025-12-05 19:15:36 +0000 UTC" firstStartedPulling="2025-12-05 19:15:37.364444184 +0000 UTC m=+715.259666490" lastFinishedPulling="2025-12-05 19:15:40.379879788 +0000 UTC m=+718.275102094" observedRunningTime="2025-12-05 19:15:40.922171927 +0000 UTC m=+718.817394253" watchObservedRunningTime="2025-12-05 19:15:40.932104735 +0000 UTC m=+718.827327061" Dec 05 19:15:42 crc kubenswrapper[4828]: I1205 19:15:42.722759 4828 scope.go:117] "RemoveContainer" containerID="37b4d8e62b5fb2c1b122d3caa56682705429ed0ab53d41440cdd8cd47a57d7a8" Dec 05 19:15:42 crc kubenswrapper[4828]: I1205 19:15:42.749604 4828 scope.go:117] "RemoveContainer" containerID="f58d271a76c7343c4732044372bb3988108eae100dae1bdc746e4db74151c645" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.090886 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-n596v" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.429715 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tzshq"] Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.446124 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovn-controller" containerID="cri-o://6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424" gracePeriod=30 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.446220 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5" gracePeriod=30 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.446259 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="kube-rbac-proxy-node" 
containerID="cri-o://6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894" gracePeriod=30 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.446374 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovn-acl-logging" containerID="cri-o://aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9" gracePeriod=30 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.446413 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="northd" containerID="cri-o://66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08" gracePeriod=30 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.446220 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="nbdb" containerID="cri-o://6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6" gracePeriod=30 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.446592 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="sbdb" containerID="cri-o://92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704" gracePeriod=30 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.488999 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" containerID="cri-o://425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f" gracePeriod=30 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.845147 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/3.log" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.848221 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovn-acl-logging/0.log" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.848884 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovn-controller/0.log" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.849419 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.907325 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2p9rv"] Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.907646 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.907668 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.907680 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.907691 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.907702 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.907748 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.907761 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="kubecfg-setup" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.907770 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="kubecfg-setup" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.907890 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="nbdb" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.907905 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="nbdb" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.907921 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovn-acl-logging" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.907933 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovn-acl-logging" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.907979 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="northd" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.907992 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="northd" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.908010 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="kube-rbac-proxy-node" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.908019 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="kube-rbac-proxy-node" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.908068 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="kube-rbac-proxy-ovn-metrics" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.908079 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="kube-rbac-proxy-ovn-metrics" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.908096 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovn-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.908135 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovn-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.908154 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="sbdb" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.908164 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="sbdb" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.910924 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="kube-rbac-proxy-ovn-metrics" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.910953 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.910968 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.910984 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="nbdb" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.911001 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovn-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.911010 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="northd" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.911026 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovn-acl-logging" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.911037 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="kube-rbac-proxy-node" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.911048 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="sbdb" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.911057 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.911178 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.911189 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.911299 
4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.911310 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.911415 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.911427 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerName="ovnkube-controller" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.913735 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ksv4w_e927a669-7d9d-442a-b020-339804e95af2/kube-multus/2.log" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.914461 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ksv4w_e927a669-7d9d-442a-b020-339804e95af2/kube-multus/1.log" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.914510 4828 generic.go:334] "Generic (PLEG): container finished" podID="e927a669-7d9d-442a-b020-339804e95af2" containerID="f0c1e0c0274d4cf63dbe8ececdf93484842b90a7184f096364b27673f0f76250" exitCode=2 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.914594 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.914596 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ksv4w" event={"ID":"e927a669-7d9d-442a-b020-339804e95af2","Type":"ContainerDied","Data":"f0c1e0c0274d4cf63dbe8ececdf93484842b90a7184f096364b27673f0f76250"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.914733 4828 scope.go:117] "RemoveContainer" containerID="836afc5e512e0143f7845dcdb8e4ca67de1b0558e78ff4e96b2674810b4152d5" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.915193 4828 scope.go:117] "RemoveContainer" containerID="f0c1e0c0274d4cf63dbe8ececdf93484842b90a7184f096364b27673f0f76250" Dec 05 19:15:47 crc kubenswrapper[4828]: E1205 19:15:47.915355 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ksv4w_openshift-multus(e927a669-7d9d-442a-b020-339804e95af2)\"" pod="openshift-multus/multus-ksv4w" podUID="e927a669-7d9d-442a-b020-339804e95af2" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.918069 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovnkube-controller/3.log" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.920448 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovn-acl-logging/0.log" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.921640 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tzshq_1be569ff-0725-412f-ac1a-da4f5077bc17/ovn-controller/0.log" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922785 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" 
containerID="425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f" exitCode=0 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922812 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerID="92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704" exitCode=0 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922840 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerID="6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6" exitCode=0 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922849 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerID="66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08" exitCode=0 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922855 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerID="46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5" exitCode=0 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922861 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerID="6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894" exitCode=0 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922869 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerID="aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9" exitCode=143 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922878 4828 generic.go:334] "Generic (PLEG): container finished" podID="1be569ff-0725-412f-ac1a-da4f5077bc17" containerID="6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424" exitCode=143 Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922896 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922919 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922931 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922940 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922951 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922961 4828 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922971 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922980 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922985 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922990 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.922996 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923001 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923007 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923012 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923017 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923023 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923030 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923037 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923043 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd"} 
Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923048 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923053 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923059 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923064 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923069 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923074 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923079 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923084 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923090 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923097 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923104 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923110 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923116 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923121 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08"} 
Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923126 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923131 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923137 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923144 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923149 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923156 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" event={"ID":"1be569ff-0725-412f-ac1a-da4f5077bc17","Type":"ContainerDied","Data":"6d3964cc67a3362ffbab0d3a0a1b8ab5c14cbbd8293031756ffc983961cc5b35"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923163 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923170 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923175 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923180 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923185 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923191 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923196 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923202 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9"} 
Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923207 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923212 4828 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407"} Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.923282 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tzshq" Dec 05 19:15:47 crc kubenswrapper[4828]: I1205 19:15:47.996974 4828 scope.go:117] "RemoveContainer" containerID="425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.011724 4828 scope.go:117] "RemoveContainer" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.020367 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-kubelet\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.020472 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-systemd-units\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.020562 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-ovnkube-config\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.020600 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.020665 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.020628 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-openvswitch\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.020781 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-ovnkube-script-lib\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.020900 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-env-overrides\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.020978 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021059 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1be569ff-0725-412f-ac1a-da4f5077bc17-ovn-node-metrics-cert\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021121 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-run-netns\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021182 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-log-socket\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021260 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-ovn\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021350 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-etc-openvswitch\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021419 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-cni-bin\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021275 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021304 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-log-socket" (OuterVolumeSpecName: "log-socket") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021280 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021327 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021420 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021469 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021536 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021619 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021845 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-var-lib-openvswitch\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021941 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-cni-netd\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.021971 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022011 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-slash\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022068 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022075 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-run-ovn-kubernetes\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022099 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-systemd\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022116 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-node-log\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022139 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022164 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v24dk\" (UniqueName: \"kubernetes.io/projected/1be569ff-0725-412f-ac1a-da4f5077bc17-kube-api-access-v24dk\") pod \"1be569ff-0725-412f-ac1a-da4f5077bc17\" (UID: \"1be569ff-0725-412f-ac1a-da4f5077bc17\") " Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022178 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-node-log" (OuterVolumeSpecName: "node-log") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022206 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022216 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022249 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-node-log\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022306 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-var-lib-openvswitch\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022374 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-run-ovn-kubernetes\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022433 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-systemd-units\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022463 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-run-systemd\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022485 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-ovnkube-config\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022627 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-log-socket\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022528 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-slash" (OuterVolumeSpecName: "host-slash") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022684 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm7qk\" (UniqueName: \"kubernetes.io/projected/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-kube-api-access-bm7qk\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022758 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-ovn-node-metrics-cert\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022916 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-kubelet\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.022968 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-cni-netd\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023020 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-run-netns\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023047 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-cni-bin\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023154 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-env-overrides\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023190 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-run-ovn\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023207 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023231 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-slash\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023245 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-run-openvswitch\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023261 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-etc-openvswitch\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023279 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-ovnkube-script-lib\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023324 4828 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023334 4828 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023343 4828 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023351 4828 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-log-socket\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023361 4828 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023368 4828 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023377 4828 reconciler_common.go:293] "Volume detached 
for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023447 4828 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023467 4828 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023478 4828 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023492 4828 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-slash\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023502 4828 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-node-log\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023514 4828 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023525 4828 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023536 4828 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023548 4828 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1be569ff-0725-412f-ac1a-da4f5077bc17-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.023572 4828 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.027471 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be569ff-0725-412f-ac1a-da4f5077bc17-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.028046 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be569ff-0725-412f-ac1a-da4f5077bc17-kube-api-access-v24dk" (OuterVolumeSpecName: "kube-api-access-v24dk") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "kube-api-access-v24dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.028089 4828 scope.go:117] "RemoveContainer" containerID="92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.036151 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1be569ff-0725-412f-ac1a-da4f5077bc17" (UID: "1be569ff-0725-412f-ac1a-da4f5077bc17"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.043292 4828 scope.go:117] "RemoveContainer" containerID="6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.057070 4828 scope.go:117] "RemoveContainer" containerID="66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.070863 4828 scope.go:117] "RemoveContainer" containerID="46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.085199 4828 scope.go:117] "RemoveContainer" containerID="6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.100520 4828 scope.go:117] "RemoveContainer" containerID="aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.115189 4828 scope.go:117] "RemoveContainer" containerID="6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126672 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-ovn-node-metrics-cert\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126714 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-kubelet\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126745 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-cni-netd\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126769 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-run-netns\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126795 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-cni-bin\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126837 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-env-overrides\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126843 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-kubelet\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126865 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-run-ovn\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126874 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-run-netns\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126891 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126923 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-slash\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126919 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-cni-netd\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126970 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-run-openvswitch\") pod \"ovnkube-node-2p9rv\" 
(UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.126944 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-run-openvswitch\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127015 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-etc-openvswitch\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127015 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127038 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-ovnkube-script-lib\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127064 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-node-log\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127080 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-run-ovn\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127091 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-var-lib-openvswitch\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127115 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-var-lib-openvswitch\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127149 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-slash\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127179 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-etc-openvswitch\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127174 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-run-ovn-kubernetes\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127249 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-systemd-units\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127291 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-run-systemd\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127324 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-ovnkube-config\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127362 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-env-overrides\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127369 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm7qk\" (UniqueName: \"kubernetes.io/projected/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-kube-api-access-bm7qk\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127401 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-cni-bin\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127415 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-log-socket\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 
19:15:48.127425 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-node-log\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127448 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-run-systemd\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127532 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-host-run-ovn-kubernetes\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127562 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-systemd-units\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.127731 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-ovnkube-script-lib\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.128022 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-log-socket\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.128057 4828 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1be569ff-0725-412f-ac1a-da4f5077bc17-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.128079 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v24dk\" (UniqueName: \"kubernetes.io/projected/1be569ff-0725-412f-ac1a-da4f5077bc17-kube-api-access-v24dk\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.128102 4828 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1be569ff-0725-412f-ac1a-da4f5077bc17-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.128347 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-ovnkube-config\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.130061 4828 scope.go:117] "RemoveContainer" 
containerID="e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.130934 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-ovn-node-metrics-cert\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.143434 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm7qk\" (UniqueName: \"kubernetes.io/projected/427dd06e-a976-4fb5-9bc5-b66f7bba2c51-kube-api-access-bm7qk\") pod \"ovnkube-node-2p9rv\" (UID: \"427dd06e-a976-4fb5-9bc5-b66f7bba2c51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.147170 4828 scope.go:117] "RemoveContainer" containerID="425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f" Dec 05 19:15:48 crc kubenswrapper[4828]: E1205 19:15:48.147594 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f\": container with ID starting with 425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f not found: ID does not exist" containerID="425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.147644 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f"} err="failed to get container status \"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f\": rpc error: code = NotFound desc = could not find container \"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f\": container with ID starting with 425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.147680 4828 scope.go:117] "RemoveContainer" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd" Dec 05 19:15:48 crc kubenswrapper[4828]: E1205 19:15:48.148152 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\": container with ID starting with b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd not found: ID does not exist" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.148192 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd"} err="failed to get container status \"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\": rpc error: code = NotFound desc = could not find container \"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\": container with ID starting with b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.148212 4828 scope.go:117] "RemoveContainer" containerID="92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704" Dec 05 19:15:48 crc kubenswrapper[4828]: 
E1205 19:15:48.148508 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\": container with ID starting with 92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704 not found: ID does not exist" containerID="92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.148527 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704"} err="failed to get container status \"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\": rpc error: code = NotFound desc = could not find container \"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\": container with ID starting with 92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.148540 4828 scope.go:117] "RemoveContainer" containerID="6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6" Dec 05 19:15:48 crc kubenswrapper[4828]: E1205 19:15:48.148859 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\": container with ID starting with 6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6 not found: ID does not exist" containerID="6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.148896 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6"} err="failed to get container status \"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\": rpc error: code = NotFound desc = could not find container \"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\": container with ID starting with 6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.148920 4828 scope.go:117] "RemoveContainer" containerID="66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08" Dec 05 19:15:48 crc kubenswrapper[4828]: E1205 19:15:48.149286 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\": container with ID starting with 66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08 not found: ID does not exist" containerID="66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.149331 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08"} err="failed to get container status \"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\": rpc error: code = NotFound desc = could not find container \"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\": container with ID starting with 66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.149370 
4828 scope.go:117] "RemoveContainer" containerID="46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5" Dec 05 19:15:48 crc kubenswrapper[4828]: E1205 19:15:48.149916 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\": container with ID starting with 46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5 not found: ID does not exist" containerID="46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.149940 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5"} err="failed to get container status \"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\": rpc error: code = NotFound desc = could not find container \"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\": container with ID starting with 46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.149954 4828 scope.go:117] "RemoveContainer" containerID="6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894" Dec 05 19:15:48 crc kubenswrapper[4828]: E1205 19:15:48.150196 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\": container with ID starting with 6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894 not found: ID does not exist" containerID="6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.150230 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894"} err="failed to get container status \"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\": rpc error: code = NotFound desc = could not find container \"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\": container with ID starting with 6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.150252 4828 scope.go:117] "RemoveContainer" containerID="aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9" Dec 05 19:15:48 crc kubenswrapper[4828]: E1205 19:15:48.150586 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\": container with ID starting with aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9 not found: ID does not exist" containerID="aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.150787 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9"} err="failed to get container status \"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\": rpc error: code = NotFound desc = could not find container \"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\": container with ID starting with 
aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.150810 4828 scope.go:117] "RemoveContainer" containerID="6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424" Dec 05 19:15:48 crc kubenswrapper[4828]: E1205 19:15:48.151073 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\": container with ID starting with 6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424 not found: ID does not exist" containerID="6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.151096 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424"} err="failed to get container status \"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\": rpc error: code = NotFound desc = could not find container \"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\": container with ID starting with 6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.151112 4828 scope.go:117] "RemoveContainer" containerID="e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407" Dec 05 19:15:48 crc kubenswrapper[4828]: E1205 19:15:48.151381 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\": container with ID starting with e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407 not found: ID does not exist" containerID="e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.151446 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407"} err="failed to get container status \"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\": rpc error: code = NotFound desc = could not find container \"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\": container with ID starting with e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.151462 4828 scope.go:117] "RemoveContainer" containerID="425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.151719 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f"} err="failed to get container status \"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f\": rpc error: code = NotFound desc = could not find container \"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f\": container with ID starting with 425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.151748 4828 scope.go:117] "RemoveContainer" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd" Dec 05 19:15:48 crc 
kubenswrapper[4828]: I1205 19:15:48.152041 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd"} err="failed to get container status \"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\": rpc error: code = NotFound desc = could not find container \"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\": container with ID starting with b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.152074 4828 scope.go:117] "RemoveContainer" containerID="92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.152374 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704"} err="failed to get container status \"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\": rpc error: code = NotFound desc = could not find container \"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\": container with ID starting with 92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.152398 4828 scope.go:117] "RemoveContainer" containerID="6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.152693 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6"} err="failed to get container status \"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\": rpc error: code = NotFound desc = could not find container \"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\": container with ID starting with 6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.152737 4828 scope.go:117] "RemoveContainer" containerID="66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.153101 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08"} err="failed to get container status \"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\": rpc error: code = NotFound desc = could not find container \"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\": container with ID starting with 66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.153142 4828 scope.go:117] "RemoveContainer" containerID="46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.153504 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5"} err="failed to get container status \"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\": rpc error: code = NotFound desc = could not find container \"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\": container with ID 
starting with 46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.153565 4828 scope.go:117] "RemoveContainer" containerID="6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.153935 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894"} err="failed to get container status \"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\": rpc error: code = NotFound desc = could not find container \"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\": container with ID starting with 6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.153972 4828 scope.go:117] "RemoveContainer" containerID="aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.154281 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9"} err="failed to get container status \"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\": rpc error: code = NotFound desc = could not find container \"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\": container with ID starting with aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.154320 4828 scope.go:117] "RemoveContainer" containerID="6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.154673 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424"} err="failed to get container status \"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\": rpc error: code = NotFound desc = could not find container \"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\": container with ID starting with 6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.154721 4828 scope.go:117] "RemoveContainer" containerID="e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.155216 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407"} err="failed to get container status \"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\": rpc error: code = NotFound desc = could not find container \"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\": container with ID starting with e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.155249 4828 scope.go:117] "RemoveContainer" containerID="425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.155550 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f"} err="failed to get container status \"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f\": rpc error: code = NotFound desc = could not find container \"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f\": container with ID starting with 425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.155591 4828 scope.go:117] "RemoveContainer" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.156075 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd"} err="failed to get container status \"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\": rpc error: code = NotFound desc = could not find container \"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\": container with ID starting with b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.156136 4828 scope.go:117] "RemoveContainer" containerID="92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.156553 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704"} err="failed to get container status \"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\": rpc error: code = NotFound desc = could not find container \"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\": container with ID starting with 92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.156590 4828 scope.go:117] "RemoveContainer" containerID="6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.156914 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6"} err="failed to get container status \"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\": rpc error: code = NotFound desc = could not find container \"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\": container with ID starting with 6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.156940 4828 scope.go:117] "RemoveContainer" containerID="66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.157313 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08"} err="failed to get container status \"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\": rpc error: code = NotFound desc = could not find container \"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\": container with ID starting with 66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08 not found: ID does not exist" Dec 
05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.157383 4828 scope.go:117] "RemoveContainer" containerID="46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.157816 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5"} err="failed to get container status \"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\": rpc error: code = NotFound desc = could not find container \"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\": container with ID starting with 46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.157870 4828 scope.go:117] "RemoveContainer" containerID="6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.158176 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894"} err="failed to get container status \"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\": rpc error: code = NotFound desc = could not find container \"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\": container with ID starting with 6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.158213 4828 scope.go:117] "RemoveContainer" containerID="aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.158559 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9"} err="failed to get container status \"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\": rpc error: code = NotFound desc = could not find container \"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\": container with ID starting with aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.158590 4828 scope.go:117] "RemoveContainer" containerID="6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.158883 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424"} err="failed to get container status \"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\": rpc error: code = NotFound desc = could not find container \"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\": container with ID starting with 6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.158918 4828 scope.go:117] "RemoveContainer" containerID="e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.159263 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407"} err="failed to get container status 
\"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\": rpc error: code = NotFound desc = could not find container \"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\": container with ID starting with e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.159293 4828 scope.go:117] "RemoveContainer" containerID="425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.159550 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f"} err="failed to get container status \"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f\": rpc error: code = NotFound desc = could not find container \"425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f\": container with ID starting with 425c8d935dd39d744ee9f099106300cc608046700cbe972e64a4958f28079d6f not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.159581 4828 scope.go:117] "RemoveContainer" containerID="b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.159872 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd"} err="failed to get container status \"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\": rpc error: code = NotFound desc = could not find container \"b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd\": container with ID starting with b6049d3385b5ee06007c8d6c858280974c2ff941a9d87574341be11e22f15bcd not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.159895 4828 scope.go:117] "RemoveContainer" containerID="92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.160165 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704"} err="failed to get container status \"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\": rpc error: code = NotFound desc = could not find container \"92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704\": container with ID starting with 92ea90fb5ba511a8661d861406ebfe2ac5c3992768d887b6c9021e449635c704 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.160185 4828 scope.go:117] "RemoveContainer" containerID="6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.160452 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6"} err="failed to get container status \"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\": rpc error: code = NotFound desc = could not find container \"6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6\": container with ID starting with 6eaae13fc00b37ad4007a65564ef2de5353c28d89acbfe1b2d92f0aaf0895cd6 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.160476 4828 scope.go:117] "RemoveContainer" 
containerID="66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.160767 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08"} err="failed to get container status \"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\": rpc error: code = NotFound desc = could not find container \"66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08\": container with ID starting with 66541fa56508f211b0e5b12467325d816ab8bbeb3c694e08c49dbe5405eceb08 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.160789 4828 scope.go:117] "RemoveContainer" containerID="46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.161079 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5"} err="failed to get container status \"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\": rpc error: code = NotFound desc = could not find container \"46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5\": container with ID starting with 46a00386311da3a72119a8f880d71b0fff4f72c5536a1d95816b1351ade037c5 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.161102 4828 scope.go:117] "RemoveContainer" containerID="6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.161412 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894"} err="failed to get container status \"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\": rpc error: code = NotFound desc = could not find container \"6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894\": container with ID starting with 6a6b37e76233d0bb581c667ccea841b3f6022d8be20015592346eee9c7838894 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.161439 4828 scope.go:117] "RemoveContainer" containerID="aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.161815 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9"} err="failed to get container status \"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\": rpc error: code = NotFound desc = could not find container \"aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9\": container with ID starting with aa15dd3e293f4d332b5f7f1ce7bf99ece8e801c81ec89d09075185abcfaeaca9 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.161881 4828 scope.go:117] "RemoveContainer" containerID="6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.162176 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424"} err="failed to get container status \"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\": rpc error: code = NotFound desc = could not find 
container \"6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424\": container with ID starting with 6476fce3428abf458d061ba6ebc1aebb2c1561b2d66259a388ce26a0f9378424 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.162198 4828 scope.go:117] "RemoveContainer" containerID="e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.162516 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407"} err="failed to get container status \"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\": rpc error: code = NotFound desc = could not find container \"e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407\": container with ID starting with e7fe796886c00df1270caffdb9d002981241631165eddc67a0ceb9be7806e407 not found: ID does not exist" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.232116 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.262094 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tzshq"] Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.265710 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tzshq"] Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.455347 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1be569ff-0725-412f-ac1a-da4f5077bc17" path="/var/lib/kubelet/pods/1be569ff-0725-412f-ac1a-da4f5077bc17/volumes" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.931741 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ksv4w_e927a669-7d9d-442a-b020-339804e95af2/kube-multus/2.log" Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.934057 4828 generic.go:334] "Generic (PLEG): container finished" podID="427dd06e-a976-4fb5-9bc5-b66f7bba2c51" containerID="9b260c1a1a0ea5de4424c468a5000931eea2bcfc22a798e7b6ce570f61600287" exitCode=0 Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.934098 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" event={"ID":"427dd06e-a976-4fb5-9bc5-b66f7bba2c51","Type":"ContainerDied","Data":"9b260c1a1a0ea5de4424c468a5000931eea2bcfc22a798e7b6ce570f61600287"} Dec 05 19:15:48 crc kubenswrapper[4828]: I1205 19:15:48.934151 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" event={"ID":"427dd06e-a976-4fb5-9bc5-b66f7bba2c51","Type":"ContainerStarted","Data":"d1beaad459d326df466d4e6e4f0f0b1cd086bf930a504f77a4d99c459002d67d"} Dec 05 19:15:49 crc kubenswrapper[4828]: I1205 19:15:49.946389 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" event={"ID":"427dd06e-a976-4fb5-9bc5-b66f7bba2c51","Type":"ContainerStarted","Data":"f077f4a03b6e86a6c83c6834bd419cf38f80c0b5ee7e3c6fc7a8d2782fcb38f4"} Dec 05 19:15:49 crc kubenswrapper[4828]: I1205 19:15:49.946777 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" event={"ID":"427dd06e-a976-4fb5-9bc5-b66f7bba2c51","Type":"ContainerStarted","Data":"0c9b54023c74971975a66151abe5617b86934d5145e5b263de3ac5d3272f9f67"} Dec 05 19:15:49 crc kubenswrapper[4828]: I1205 
Dec 05 19:15:49 crc kubenswrapper[4828]: I1205 19:15:49.946794 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" event={"ID":"427dd06e-a976-4fb5-9bc5-b66f7bba2c51","Type":"ContainerStarted","Data":"03b11bc26956c5bcdb369984dedb1cf9850fde098f2e90c0ccc4be0e92c86466"}
Dec 05 19:15:49 crc kubenswrapper[4828]: I1205 19:15:49.946801 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" event={"ID":"427dd06e-a976-4fb5-9bc5-b66f7bba2c51","Type":"ContainerStarted","Data":"f0b642bc577cabfb438c9b1cd92414edb1acdabb64635114796c262d54a11f16"}
Dec 05 19:15:49 crc kubenswrapper[4828]: I1205 19:15:49.946810 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" event={"ID":"427dd06e-a976-4fb5-9bc5-b66f7bba2c51","Type":"ContainerStarted","Data":"a63e71e13d69651f9d08a80f628e1c8d7717ef4895fbc2989f0bddfd47930c13"}
Dec 05 19:15:52 crc kubenswrapper[4828]: I1205 19:15:52.971288 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" event={"ID":"427dd06e-a976-4fb5-9bc5-b66f7bba2c51","Type":"ContainerStarted","Data":"7daf14c9bd6ba46113f6a29d8c8f4f3835762d7d69b9ac4d0bacf1b501b75956"}
Dec 05 19:15:56 crc kubenswrapper[4828]: I1205 19:15:56.008980 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" event={"ID":"427dd06e-a976-4fb5-9bc5-b66f7bba2c51","Type":"ContainerStarted","Data":"d349538414930f57ca298f7c61e9f0a0cf206dfd58ebb1083fbeaa45ae7ddae8"}
Dec 05 19:15:56 crc kubenswrapper[4828]: I1205 19:15:56.009429 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv"
Dec 05 19:15:56 crc kubenswrapper[4828]: I1205 19:15:56.009442 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv"
Dec 05 19:15:56 crc kubenswrapper[4828]: I1205 19:15:56.009452 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv"
Dec 05 19:15:56 crc kubenswrapper[4828]: I1205 19:15:56.033347 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv"
Dec 05 19:15:56 crc kubenswrapper[4828]: I1205 19:15:56.036412 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv" podStartSLOduration=9.036396254 podStartE2EDuration="9.036396254s" podCreationTimestamp="2025-12-05 19:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:15:56.035342645 +0000 UTC m=+733.930564941" watchObservedRunningTime="2025-12-05 19:15:56.036396254 +0000 UTC m=+733.931618560"
Dec 05 19:15:56 crc kubenswrapper[4828]: I1205 19:15:56.046905 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv"
Dec 05 19:15:59 crc kubenswrapper[4828]: I1205 19:15:59.447212 4828 scope.go:117] "RemoveContainer" containerID="f0c1e0c0274d4cf63dbe8ececdf93484842b90a7184f096364b27673f0f76250"
Dec 05 19:16:01 crc kubenswrapper[4828]: I1205 19:16:01.043683 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ksv4w_e927a669-7d9d-442a-b020-339804e95af2/kube-multus/2.log"
Dec 05 19:16:01 crc kubenswrapper[4828]: I1205 19:16:01.045391 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ksv4w" event={"ID":"e927a669-7d9d-442a-b020-339804e95af2","Type":"ContainerStarted","Data":"0c75f4980108995430fa531e159db421312d13784413f4fe00ebaa0c8faa5e9d"}
Dec 05 19:16:05 crc kubenswrapper[4828]: I1205 19:16:05.259576 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 19:16:05 crc kubenswrapper[4828]: I1205 19:16:05.259932 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 19:16:18 crc kubenswrapper[4828]: I1205 19:16:18.261034 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2p9rv"
Dec 05 19:16:24 crc kubenswrapper[4828]: I1205 19:16:24.433049 4828 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.426313 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"]
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.428973 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.440435 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.456456 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"]
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.535878 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/800451e0-a385-4d99-ab2d-706b98d39f8d-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj\" (UID: \"800451e0-a385-4d99-ab2d-706b98d39f8d\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.535970 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/800451e0-a385-4d99-ab2d-706b98d39f8d-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj\" (UID: \"800451e0-a385-4d99-ab2d-706b98d39f8d\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.536089 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqkxv\" (UniqueName: \"kubernetes.io/projected/800451e0-a385-4d99-ab2d-706b98d39f8d-kube-api-access-qqkxv\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj\" (UID: \"800451e0-a385-4d99-ab2d-706b98d39f8d\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.637515 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/800451e0-a385-4d99-ab2d-706b98d39f8d-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj\" (UID: \"800451e0-a385-4d99-ab2d-706b98d39f8d\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.637601 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/800451e0-a385-4d99-ab2d-706b98d39f8d-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj\" (UID: \"800451e0-a385-4d99-ab2d-706b98d39f8d\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.637651 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqkxv\" (UniqueName: \"kubernetes.io/projected/800451e0-a385-4d99-ab2d-706b98d39f8d-kube-api-access-qqkxv\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj\" (UID: \"800451e0-a385-4d99-ab2d-706b98d39f8d\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.638046 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/800451e0-a385-4d99-ab2d-706b98d39f8d-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj\" (UID: \"800451e0-a385-4d99-ab2d-706b98d39f8d\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.638223 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/800451e0-a385-4d99-ab2d-706b98d39f8d-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj\" (UID: \"800451e0-a385-4d99-ab2d-706b98d39f8d\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.656931 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqkxv\" (UniqueName: \"kubernetes.io/projected/800451e0-a385-4d99-ab2d-706b98d39f8d-kube-api-access-qqkxv\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj\" (UID: \"800451e0-a385-4d99-ab2d-706b98d39f8d\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.761855 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"
Dec 05 19:16:27 crc kubenswrapper[4828]: I1205 19:16:27.977175 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj"]
Dec 05 19:16:28 crc kubenswrapper[4828]: I1205 19:16:28.214945 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj" event={"ID":"800451e0-a385-4d99-ab2d-706b98d39f8d","Type":"ContainerStarted","Data":"ba79b3246cbb6a6622ff9fbcadc196d358610e7a17fc1587f3c89201094258b1"}
Dec 05 19:16:28 crc kubenswrapper[4828]: I1205 19:16:28.214988 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj" event={"ID":"800451e0-a385-4d99-ab2d-706b98d39f8d","Type":"ContainerStarted","Data":"9d7f8262b9cfeeadb2fe074e83c8d0d271c19fb57b3cb42a98244ddc2b9a85c4"}
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.224040 4828 generic.go:334] "Generic (PLEG): container finished" podID="800451e0-a385-4d99-ab2d-706b98d39f8d" containerID="ba79b3246cbb6a6622ff9fbcadc196d358610e7a17fc1587f3c89201094258b1" exitCode=0
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.224084 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj" event={"ID":"800451e0-a385-4d99-ab2d-706b98d39f8d","Type":"ContainerDied","Data":"ba79b3246cbb6a6622ff9fbcadc196d358610e7a17fc1587f3c89201094258b1"}
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.599037 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p5pxx"]
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.600536 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p5pxx"
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.618087 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p5pxx"]
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.664053 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fds2\" (UniqueName: \"kubernetes.io/projected/f55f684a-dad3-467c-a32d-544c0c2439b4-kube-api-access-5fds2\") pod \"redhat-operators-p5pxx\" (UID: \"f55f684a-dad3-467c-a32d-544c0c2439b4\") " pod="openshift-marketplace/redhat-operators-p5pxx"
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.664116 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55f684a-dad3-467c-a32d-544c0c2439b4-catalog-content\") pod \"redhat-operators-p5pxx\" (UID: \"f55f684a-dad3-467c-a32d-544c0c2439b4\") " pod="openshift-marketplace/redhat-operators-p5pxx"
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.664202 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55f684a-dad3-467c-a32d-544c0c2439b4-utilities\") pod \"redhat-operators-p5pxx\" (UID: \"f55f684a-dad3-467c-a32d-544c0c2439b4\") " pod="openshift-marketplace/redhat-operators-p5pxx"
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.765799 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55f684a-dad3-467c-a32d-544c0c2439b4-utilities\") pod \"redhat-operators-p5pxx\" (UID: \"f55f684a-dad3-467c-a32d-544c0c2439b4\") " pod="openshift-marketplace/redhat-operators-p5pxx"
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.765901 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fds2\" (UniqueName: \"kubernetes.io/projected/f55f684a-dad3-467c-a32d-544c0c2439b4-kube-api-access-5fds2\") pod \"redhat-operators-p5pxx\" (UID: \"f55f684a-dad3-467c-a32d-544c0c2439b4\") " pod="openshift-marketplace/redhat-operators-p5pxx"
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.765933 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55f684a-dad3-467c-a32d-544c0c2439b4-catalog-content\") pod \"redhat-operators-p5pxx\" (UID: \"f55f684a-dad3-467c-a32d-544c0c2439b4\") " pod="openshift-marketplace/redhat-operators-p5pxx"
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.766308 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55f684a-dad3-467c-a32d-544c0c2439b4-utilities\") pod \"redhat-operators-p5pxx\" (UID: \"f55f684a-dad3-467c-a32d-544c0c2439b4\") " pod="openshift-marketplace/redhat-operators-p5pxx"
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.766383 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55f684a-dad3-467c-a32d-544c0c2439b4-catalog-content\") pod \"redhat-operators-p5pxx\" (UID: \"f55f684a-dad3-467c-a32d-544c0c2439b4\") " pod="openshift-marketplace/redhat-operators-p5pxx"
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.804594 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fds2\" (UniqueName: \"kubernetes.io/projected/f55f684a-dad3-467c-a32d-544c0c2439b4-kube-api-access-5fds2\") pod \"redhat-operators-p5pxx\" (UID: \"f55f684a-dad3-467c-a32d-544c0c2439b4\") " pod="openshift-marketplace/redhat-operators-p5pxx"
Dec 05 19:16:29 crc kubenswrapper[4828]: I1205 19:16:29.926683 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p5pxx"
Dec 05 19:16:30 crc kubenswrapper[4828]: I1205 19:16:30.280336 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p5pxx"]
Dec 05 19:16:31 crc kubenswrapper[4828]: I1205 19:16:31.241024 4828 generic.go:334] "Generic (PLEG): container finished" podID="800451e0-a385-4d99-ab2d-706b98d39f8d" containerID="0b78a2e84aae77e63cebc70e20446650e1df9bac4f444cc93b8fda4a910123c4" exitCode=0
Dec 05 19:16:31 crc kubenswrapper[4828]: I1205 19:16:31.241096 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj" event={"ID":"800451e0-a385-4d99-ab2d-706b98d39f8d","Type":"ContainerDied","Data":"0b78a2e84aae77e63cebc70e20446650e1df9bac4f444cc93b8fda4a910123c4"}
Dec 05 19:16:31 crc kubenswrapper[4828]: I1205 19:16:31.244881 4828 generic.go:334] "Generic (PLEG): container finished" podID="f55f684a-dad3-467c-a32d-544c0c2439b4" containerID="33b12ebe7182334519e40fc9a40a82e7b4c163b223e638151b4d1d6427b1a0c1" exitCode=0
Dec 05 19:16:31 crc kubenswrapper[4828]: I1205 19:16:31.244954 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5pxx" event={"ID":"f55f684a-dad3-467c-a32d-544c0c2439b4","Type":"ContainerDied","Data":"33b12ebe7182334519e40fc9a40a82e7b4c163b223e638151b4d1d6427b1a0c1"}
Dec 05 19:16:31 crc kubenswrapper[4828]: I1205 19:16:31.244981 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5pxx" event={"ID":"f55f684a-dad3-467c-a32d-544c0c2439b4","Type":"ContainerStarted","Data":"8e779d2dfcf082d32a459052dcd5aa1168b46ab9b05c2d7ab44f7efadee7b1a3"}
Dec 05 19:16:32 crc kubenswrapper[4828]: I1205 19:16:32.253700 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5pxx" event={"ID":"f55f684a-dad3-467c-a32d-544c0c2439b4","Type":"ContainerStarted","Data":"a4a477b8074b9b53bdc28f997dc8e278695759415c28adebdcee8f3bea983ee5"}
Dec 05 19:16:32 crc kubenswrapper[4828]: I1205 19:16:32.256256 4828 generic.go:334] "Generic (PLEG): container finished" podID="800451e0-a385-4d99-ab2d-706b98d39f8d" containerID="7aead80a2c6427078f01e8beedee2401a11a5e20ae0d44be7716fd2c51ec99cc" exitCode=0
Dec 05 19:16:32 crc kubenswrapper[4828]: I1205 19:16:32.256287 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj" event={"ID":"800451e0-a385-4d99-ab2d-706b98d39f8d","Type":"ContainerDied","Data":"7aead80a2c6427078f01e8beedee2401a11a5e20ae0d44be7716fd2c51ec99cc"}
Dec 05 19:16:33 crc kubenswrapper[4828]: I1205 19:16:33.264226 4828 generic.go:334] "Generic (PLEG): container finished" podID="f55f684a-dad3-467c-a32d-544c0c2439b4" containerID="a4a477b8074b9b53bdc28f997dc8e278695759415c28adebdcee8f3bea983ee5" exitCode=0
Dec 05 19:16:33 crc kubenswrapper[4828]: I1205 19:16:33.264361 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5pxx" event={"ID":"f55f684a-dad3-467c-a32d-544c0c2439b4","Type":"ContainerDied","Data":"a4a477b8074b9b53bdc28f997dc8e278695759415c28adebdcee8f3bea983ee5"}
event={"ID":"f55f684a-dad3-467c-a32d-544c0c2439b4","Type":"ContainerDied","Data":"a4a477b8074b9b53bdc28f997dc8e278695759415c28adebdcee8f3bea983ee5"} Dec 05 19:16:33 crc kubenswrapper[4828]: I1205 19:16:33.711739 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj" Dec 05 19:16:33 crc kubenswrapper[4828]: I1205 19:16:33.724087 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqkxv\" (UniqueName: \"kubernetes.io/projected/800451e0-a385-4d99-ab2d-706b98d39f8d-kube-api-access-qqkxv\") pod \"800451e0-a385-4d99-ab2d-706b98d39f8d\" (UID: \"800451e0-a385-4d99-ab2d-706b98d39f8d\") " Dec 05 19:16:33 crc kubenswrapper[4828]: I1205 19:16:33.724239 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/800451e0-a385-4d99-ab2d-706b98d39f8d-util\") pod \"800451e0-a385-4d99-ab2d-706b98d39f8d\" (UID: \"800451e0-a385-4d99-ab2d-706b98d39f8d\") " Dec 05 19:16:33 crc kubenswrapper[4828]: I1205 19:16:33.726807 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/800451e0-a385-4d99-ab2d-706b98d39f8d-bundle\") pod \"800451e0-a385-4d99-ab2d-706b98d39f8d\" (UID: \"800451e0-a385-4d99-ab2d-706b98d39f8d\") " Dec 05 19:16:33 crc kubenswrapper[4828]: I1205 19:16:33.729013 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/800451e0-a385-4d99-ab2d-706b98d39f8d-bundle" (OuterVolumeSpecName: "bundle") pod "800451e0-a385-4d99-ab2d-706b98d39f8d" (UID: "800451e0-a385-4d99-ab2d-706b98d39f8d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:16:33 crc kubenswrapper[4828]: I1205 19:16:33.731801 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800451e0-a385-4d99-ab2d-706b98d39f8d-kube-api-access-qqkxv" (OuterVolumeSpecName: "kube-api-access-qqkxv") pod "800451e0-a385-4d99-ab2d-706b98d39f8d" (UID: "800451e0-a385-4d99-ab2d-706b98d39f8d"). InnerVolumeSpecName "kube-api-access-qqkxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:16:33 crc kubenswrapper[4828]: I1205 19:16:33.829314 4828 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/800451e0-a385-4d99-ab2d-706b98d39f8d-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:16:33 crc kubenswrapper[4828]: I1205 19:16:33.829375 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqkxv\" (UniqueName: \"kubernetes.io/projected/800451e0-a385-4d99-ab2d-706b98d39f8d-kube-api-access-qqkxv\") on node \"crc\" DevicePath \"\"" Dec 05 19:16:34 crc kubenswrapper[4828]: I1205 19:16:34.274568 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj" event={"ID":"800451e0-a385-4d99-ab2d-706b98d39f8d","Type":"ContainerDied","Data":"9d7f8262b9cfeeadb2fe074e83c8d0d271c19fb57b3cb42a98244ddc2b9a85c4"} Dec 05 19:16:34 crc kubenswrapper[4828]: I1205 19:16:34.274599 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d7f8262b9cfeeadb2fe074e83c8d0d271c19fb57b3cb42a98244ddc2b9a85c4" Dec 05 19:16:34 crc kubenswrapper[4828]: I1205 19:16:34.274702 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj" Dec 05 19:16:34 crc kubenswrapper[4828]: I1205 19:16:34.386057 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/800451e0-a385-4d99-ab2d-706b98d39f8d-util" (OuterVolumeSpecName: "util") pod "800451e0-a385-4d99-ab2d-706b98d39f8d" (UID: "800451e0-a385-4d99-ab2d-706b98d39f8d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:16:34 crc kubenswrapper[4828]: I1205 19:16:34.437196 4828 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/800451e0-a385-4d99-ab2d-706b98d39f8d-util\") on node \"crc\" DevicePath \"\"" Dec 05 19:16:35 crc kubenswrapper[4828]: I1205 19:16:35.259624 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:16:35 crc kubenswrapper[4828]: I1205 19:16:35.259716 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:16:35 crc kubenswrapper[4828]: I1205 19:16:35.283506 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5pxx" event={"ID":"f55f684a-dad3-467c-a32d-544c0c2439b4","Type":"ContainerStarted","Data":"a398787b0a26fe19af2503c25bc374336ec360896be427712cce55633539dcdc"} Dec 05 19:16:35 crc kubenswrapper[4828]: I1205 19:16:35.315330 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p5pxx" podStartSLOduration=3.148445707 podStartE2EDuration="6.31529982s" podCreationTimestamp="2025-12-05 19:16:29 +0000 UTC" firstStartedPulling="2025-12-05 19:16:31.246242061 +0000 UTC m=+769.141464367" lastFinishedPulling="2025-12-05 19:16:34.413096134 +0000 UTC m=+772.308318480" observedRunningTime="2025-12-05 19:16:35.307505449 +0000 UTC m=+773.202727795" watchObservedRunningTime="2025-12-05 19:16:35.31529982 +0000 UTC m=+773.210522166" Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.757165 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-2zdc5"] Dec 05 19:16:38 crc kubenswrapper[4828]: E1205 19:16:38.757381 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800451e0-a385-4d99-ab2d-706b98d39f8d" containerName="extract" Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.757394 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="800451e0-a385-4d99-ab2d-706b98d39f8d" containerName="extract" Dec 05 19:16:38 crc kubenswrapper[4828]: E1205 19:16:38.757408 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800451e0-a385-4d99-ab2d-706b98d39f8d" containerName="util" Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.757416 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="800451e0-a385-4d99-ab2d-706b98d39f8d" containerName="util" Dec 05 19:16:38 crc kubenswrapper[4828]: E1205 19:16:38.757431 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="800451e0-a385-4d99-ab2d-706b98d39f8d" containerName="pull" Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.757437 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="800451e0-a385-4d99-ab2d-706b98d39f8d" containerName="pull" Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.757524 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="800451e0-a385-4d99-ab2d-706b98d39f8d" containerName="extract" Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.757882 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2zdc5" Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.760977 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-c4p5m" Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.761119 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.762209 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.772733 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-2zdc5"] Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.832425 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dljhg\" (UniqueName: \"kubernetes.io/projected/1691d52d-868b-4121-8863-2a59db739b1b-kube-api-access-dljhg\") pod \"nmstate-operator-5b5b58f5c8-2zdc5\" (UID: \"1691d52d-868b-4121-8863-2a59db739b1b\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2zdc5" Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.933534 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dljhg\" (UniqueName: \"kubernetes.io/projected/1691d52d-868b-4121-8863-2a59db739b1b-kube-api-access-dljhg\") pod \"nmstate-operator-5b5b58f5c8-2zdc5\" (UID: \"1691d52d-868b-4121-8863-2a59db739b1b\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2zdc5" Dec 05 19:16:38 crc kubenswrapper[4828]: I1205 19:16:38.957198 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dljhg\" (UniqueName: \"kubernetes.io/projected/1691d52d-868b-4121-8863-2a59db739b1b-kube-api-access-dljhg\") pod \"nmstate-operator-5b5b58f5c8-2zdc5\" (UID: \"1691d52d-868b-4121-8863-2a59db739b1b\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2zdc5" Dec 05 19:16:39 crc kubenswrapper[4828]: I1205 19:16:39.075434 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2zdc5" Dec 05 19:16:39 crc kubenswrapper[4828]: I1205 19:16:39.265223 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-2zdc5"] Dec 05 19:16:39 crc kubenswrapper[4828]: I1205 19:16:39.317945 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2zdc5" event={"ID":"1691d52d-868b-4121-8863-2a59db739b1b","Type":"ContainerStarted","Data":"3caf4b5db35a9853e1a8698296fab37610ab2d201aa3737439216a70328f012c"} Dec 05 19:16:39 crc kubenswrapper[4828]: I1205 19:16:39.926855 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p5pxx" Dec 05 19:16:39 crc kubenswrapper[4828]: I1205 19:16:39.926933 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p5pxx" Dec 05 19:16:40 crc kubenswrapper[4828]: I1205 19:16:40.964400 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p5pxx" podUID="f55f684a-dad3-467c-a32d-544c0c2439b4" containerName="registry-server" probeResult="failure" output=< Dec 05 19:16:40 crc kubenswrapper[4828]: timeout: failed to connect service ":50051" within 1s Dec 05 19:16:40 crc kubenswrapper[4828]: > Dec 05 19:16:42 crc kubenswrapper[4828]: I1205 19:16:42.343516 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2zdc5" event={"ID":"1691d52d-868b-4121-8863-2a59db739b1b","Type":"ContainerStarted","Data":"77c60c8d50324eb31df9895d029e2d220715e35fba6404e88b0119f767879193"} Dec 05 19:16:42 crc kubenswrapper[4828]: I1205 19:16:42.362626 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2zdc5" podStartSLOduration=1.562611599 podStartE2EDuration="4.362607529s" podCreationTimestamp="2025-12-05 19:16:38 +0000 UTC" firstStartedPulling="2025-12-05 19:16:39.280083677 +0000 UTC m=+777.175305983" lastFinishedPulling="2025-12-05 19:16:42.080079607 +0000 UTC m=+779.975301913" observedRunningTime="2025-12-05 19:16:42.361366135 +0000 UTC m=+780.256588441" watchObservedRunningTime="2025-12-05 19:16:42.362607529 +0000 UTC m=+780.257829835" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.733284 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-jzb4h"] Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.735805 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jzb4h" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.738319 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-j58sv" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.749798 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-jzb4h"] Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.765992 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8"] Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.766959 4828 util.go:30] "No sandbox for pod can be found. 
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.771230 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lmln5"]
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.772051 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lmln5"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.772342 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.794571 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8"]
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.845049 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c535354b-ac85-4a30-9f7d-1547f2db8fbc-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-j5ps8\" (UID: \"c535354b-ac85-4a30-9f7d-1547f2db8fbc\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.845120 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8-nmstate-lock\") pod \"nmstate-handler-lmln5\" (UID: \"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8\") " pod="openshift-nmstate/nmstate-handler-lmln5"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.845175 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8-dbus-socket\") pod \"nmstate-handler-lmln5\" (UID: \"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8\") " pod="openshift-nmstate/nmstate-handler-lmln5"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.845212 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrw5s\" (UniqueName: \"kubernetes.io/projected/5e07f179-6cb8-4771-894c-7ad6c2ee6b10-kube-api-access-rrw5s\") pod \"nmstate-metrics-7f946cbc9-jzb4h\" (UID: \"5e07f179-6cb8-4771-894c-7ad6c2ee6b10\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jzb4h"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.845232 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9j2d\" (UniqueName: \"kubernetes.io/projected/c535354b-ac85-4a30-9f7d-1547f2db8fbc-kube-api-access-p9j2d\") pod \"nmstate-webhook-5f6d4c5ccb-j5ps8\" (UID: \"c535354b-ac85-4a30-9f7d-1547f2db8fbc\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.845262 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vxs6\" (UniqueName: \"kubernetes.io/projected/7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8-kube-api-access-8vxs6\") pod \"nmstate-handler-lmln5\" (UID: \"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8\") " pod="openshift-nmstate/nmstate-handler-lmln5"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.845288 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8-ovs-socket\") pod \"nmstate-handler-lmln5\" (UID: \"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8\") " pod="openshift-nmstate/nmstate-handler-lmln5"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.887652 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4"]
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.891366 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.893661 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-knrdg"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.893682 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.893721 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.896637 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4"]
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.946793 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8-nmstate-lock\") pod \"nmstate-handler-lmln5\" (UID: \"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8\") " pod="openshift-nmstate/nmstate-handler-lmln5"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.946875 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8-dbus-socket\") pod \"nmstate-handler-lmln5\" (UID: \"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8\") " pod="openshift-nmstate/nmstate-handler-lmln5"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.946902 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/96df2436-0a55-4b21-900b-dfedbafa290d-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-hbrt4\" (UID: \"96df2436-0a55-4b21-900b-dfedbafa290d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.946942 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrw5s\" (UniqueName: \"kubernetes.io/projected/5e07f179-6cb8-4771-894c-7ad6c2ee6b10-kube-api-access-rrw5s\") pod \"nmstate-metrics-7f946cbc9-jzb4h\" (UID: \"5e07f179-6cb8-4771-894c-7ad6c2ee6b10\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jzb4h"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.946995 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9j2d\" (UniqueName: \"kubernetes.io/projected/c535354b-ac85-4a30-9f7d-1547f2db8fbc-kube-api-access-p9j2d\") pod \"nmstate-webhook-5f6d4c5ccb-j5ps8\" (UID: \"c535354b-ac85-4a30-9f7d-1547f2db8fbc\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8"
Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.947019 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8-nmstate-lock\") pod \"nmstate-handler-lmln5\" (UID: \"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8\") " pod="openshift-nmstate/nmstate-handler-lmln5"
pod="openshift-nmstate/nmstate-handler-lmln5" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.947048 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vxs6\" (UniqueName: \"kubernetes.io/projected/7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8-kube-api-access-8vxs6\") pod \"nmstate-handler-lmln5\" (UID: \"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8\") " pod="openshift-nmstate/nmstate-handler-lmln5" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.947217 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8-ovs-socket\") pod \"nmstate-handler-lmln5\" (UID: \"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8\") " pod="openshift-nmstate/nmstate-handler-lmln5" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.947260 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8-dbus-socket\") pod \"nmstate-handler-lmln5\" (UID: \"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8\") " pod="openshift-nmstate/nmstate-handler-lmln5" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.947306 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8-ovs-socket\") pod \"nmstate-handler-lmln5\" (UID: \"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8\") " pod="openshift-nmstate/nmstate-handler-lmln5" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.947347 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpds2\" (UniqueName: \"kubernetes.io/projected/96df2436-0a55-4b21-900b-dfedbafa290d-kube-api-access-wpds2\") pod \"nmstate-console-plugin-7fbb5f6569-hbrt4\" (UID: \"96df2436-0a55-4b21-900b-dfedbafa290d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.947378 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c535354b-ac85-4a30-9f7d-1547f2db8fbc-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-j5ps8\" (UID: \"c535354b-ac85-4a30-9f7d-1547f2db8fbc\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.947413 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/96df2436-0a55-4b21-900b-dfedbafa290d-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-hbrt4\" (UID: \"96df2436-0a55-4b21-900b-dfedbafa290d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.965558 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c535354b-ac85-4a30-9f7d-1547f2db8fbc-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-j5ps8\" (UID: \"c535354b-ac85-4a30-9f7d-1547f2db8fbc\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.970660 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vxs6\" (UniqueName: \"kubernetes.io/projected/7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8-kube-api-access-8vxs6\") pod \"nmstate-handler-lmln5\" (UID: 
\"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8\") " pod="openshift-nmstate/nmstate-handler-lmln5" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.970882 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9j2d\" (UniqueName: \"kubernetes.io/projected/c535354b-ac85-4a30-9f7d-1547f2db8fbc-kube-api-access-p9j2d\") pod \"nmstate-webhook-5f6d4c5ccb-j5ps8\" (UID: \"c535354b-ac85-4a30-9f7d-1547f2db8fbc\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8" Dec 05 19:16:47 crc kubenswrapper[4828]: I1205 19:16:47.970926 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrw5s\" (UniqueName: \"kubernetes.io/projected/5e07f179-6cb8-4771-894c-7ad6c2ee6b10-kube-api-access-rrw5s\") pod \"nmstate-metrics-7f946cbc9-jzb4h\" (UID: \"5e07f179-6cb8-4771-894c-7ad6c2ee6b10\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jzb4h" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.048321 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpds2\" (UniqueName: \"kubernetes.io/projected/96df2436-0a55-4b21-900b-dfedbafa290d-kube-api-access-wpds2\") pod \"nmstate-console-plugin-7fbb5f6569-hbrt4\" (UID: \"96df2436-0a55-4b21-900b-dfedbafa290d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.048572 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/96df2436-0a55-4b21-900b-dfedbafa290d-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-hbrt4\" (UID: \"96df2436-0a55-4b21-900b-dfedbafa290d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.048616 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/96df2436-0a55-4b21-900b-dfedbafa290d-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-hbrt4\" (UID: \"96df2436-0a55-4b21-900b-dfedbafa290d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.049450 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/96df2436-0a55-4b21-900b-dfedbafa290d-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-hbrt4\" (UID: \"96df2436-0a55-4b21-900b-dfedbafa290d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.057196 4828 util.go:30] "No sandbox for pod can be found. 
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.065952 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/96df2436-0a55-4b21-900b-dfedbafa290d-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-hbrt4\" (UID: \"96df2436-0a55-4b21-900b-dfedbafa290d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.079338 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpds2\" (UniqueName: \"kubernetes.io/projected/96df2436-0a55-4b21-900b-dfedbafa290d-kube-api-access-wpds2\") pod \"nmstate-console-plugin-7fbb5f6569-hbrt4\" (UID: \"96df2436-0a55-4b21-900b-dfedbafa290d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.096336 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-655b6b84f6-jhqjk"]
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.097091 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.105123 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.107419 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-655b6b84f6-jhqjk"]
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.123512 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lmln5"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.149728 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-trusted-ca-bundle\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.149761 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-console-serving-cert\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.149792 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-service-ca\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.149808 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmgjk\" (UniqueName: \"kubernetes.io/projected/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-kube-api-access-pmgjk\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.149847 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-console-oauth-config\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.149874 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-oauth-serving-cert\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.149916 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-console-config\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.203542 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.250542 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-console-config\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.250933 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-trusted-ca-bundle\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.250949 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-console-serving-cert\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.250981 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-service-ca\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.250997 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmgjk\" (UniqueName: \"kubernetes.io/projected/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-kube-api-access-pmgjk\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.251020 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-console-oauth-config\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk"
\"kubernetes.io/secret/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-console-oauth-config\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.251071 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-oauth-serving-cert\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.253655 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-oauth-serving-cert\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.254262 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-service-ca\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.259678 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-console-oauth-config\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.260454 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-console-serving-cert\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.264002 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-trusted-ca-bundle\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.265392 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-console-config\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.272045 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmgjk\" (UniqueName: \"kubernetes.io/projected/ea87b101-a75c-4b77-bf7e-3b2f709b84c1-kube-api-access-pmgjk\") pod \"console-655b6b84f6-jhqjk\" (UID: \"ea87b101-a75c-4b77-bf7e-3b2f709b84c1\") " pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.350736 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-jzb4h"] Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 
19:16:48.378257 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lmln5" event={"ID":"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8","Type":"ContainerStarted","Data":"287bb960c87e5949ddb5cedbf7f0afc447c327bc18c2ee54757bd07e0fe78362"} Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.401511 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8"] Dec 05 19:16:48 crc kubenswrapper[4828]: W1205 19:16:48.408329 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc535354b_ac85_4a30_9f7d_1547f2db8fbc.slice/crio-971400525d77e2667a2c4ee316c34ce81243d49bea06879e93899b311923c6a0 WatchSource:0}: Error finding container 971400525d77e2667a2c4ee316c34ce81243d49bea06879e93899b311923c6a0: Status 404 returned error can't find the container with id 971400525d77e2667a2c4ee316c34ce81243d49bea06879e93899b311923c6a0 Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.411450 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.453160 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4"] Dec 05 19:16:48 crc kubenswrapper[4828]: W1205 19:16:48.455234 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96df2436_0a55_4b21_900b_dfedbafa290d.slice/crio-2814fe8bca485c27e1d2712f048bdd10e694de66f3bcf4660e5b1beeb47607e1 WatchSource:0}: Error finding container 2814fe8bca485c27e1d2712f048bdd10e694de66f3bcf4660e5b1beeb47607e1: Status 404 returned error can't find the container with id 2814fe8bca485c27e1d2712f048bdd10e694de66f3bcf4660e5b1beeb47607e1 Dec 05 19:16:48 crc kubenswrapper[4828]: I1205 19:16:48.850933 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-655b6b84f6-jhqjk"] Dec 05 19:16:48 crc kubenswrapper[4828]: W1205 19:16:48.858342 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea87b101_a75c_4b77_bf7e_3b2f709b84c1.slice/crio-bb8f302341e10f39a06f6741b07f296aa7ecc5207af8f2c9e4ceeca7e6aa2b69 WatchSource:0}: Error finding container bb8f302341e10f39a06f6741b07f296aa7ecc5207af8f2c9e4ceeca7e6aa2b69: Status 404 returned error can't find the container with id bb8f302341e10f39a06f6741b07f296aa7ecc5207af8f2c9e4ceeca7e6aa2b69 Dec 05 19:16:49 crc kubenswrapper[4828]: I1205 19:16:49.386365 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jzb4h" event={"ID":"5e07f179-6cb8-4771-894c-7ad6c2ee6b10","Type":"ContainerStarted","Data":"ed2eece8f9d692ee9b0493b802b0e5503bc4d247ce9baebc4be482f6793330af"} Dec 05 19:16:49 crc kubenswrapper[4828]: I1205 19:16:49.389162 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4" event={"ID":"96df2436-0a55-4b21-900b-dfedbafa290d","Type":"ContainerStarted","Data":"2814fe8bca485c27e1d2712f048bdd10e694de66f3bcf4660e5b1beeb47607e1"} Dec 05 19:16:49 crc kubenswrapper[4828]: I1205 19:16:49.390756 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8" 
event={"ID":"c535354b-ac85-4a30-9f7d-1547f2db8fbc","Type":"ContainerStarted","Data":"971400525d77e2667a2c4ee316c34ce81243d49bea06879e93899b311923c6a0"} Dec 05 19:16:49 crc kubenswrapper[4828]: I1205 19:16:49.391972 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-655b6b84f6-jhqjk" event={"ID":"ea87b101-a75c-4b77-bf7e-3b2f709b84c1","Type":"ContainerStarted","Data":"bb8f302341e10f39a06f6741b07f296aa7ecc5207af8f2c9e4ceeca7e6aa2b69"} Dec 05 19:16:49 crc kubenswrapper[4828]: I1205 19:16:49.980138 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p5pxx" Dec 05 19:16:50 crc kubenswrapper[4828]: I1205 19:16:50.020231 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p5pxx" Dec 05 19:16:50 crc kubenswrapper[4828]: I1205 19:16:50.214657 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p5pxx"] Dec 05 19:16:50 crc kubenswrapper[4828]: I1205 19:16:50.398572 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-655b6b84f6-jhqjk" event={"ID":"ea87b101-a75c-4b77-bf7e-3b2f709b84c1","Type":"ContainerStarted","Data":"8259d6205f84cec5c89c0ac8b2aa88d5257207ba37d65aa68db2b8e6ae4e69d4"} Dec 05 19:16:50 crc kubenswrapper[4828]: I1205 19:16:50.415083 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-655b6b84f6-jhqjk" podStartSLOduration=2.415062026 podStartE2EDuration="2.415062026s" podCreationTimestamp="2025-12-05 19:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:16:50.41372363 +0000 UTC m=+788.308945936" watchObservedRunningTime="2025-12-05 19:16:50.415062026 +0000 UTC m=+788.310284332" Dec 05 19:16:51 crc kubenswrapper[4828]: I1205 19:16:51.403473 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p5pxx" podUID="f55f684a-dad3-467c-a32d-544c0c2439b4" containerName="registry-server" containerID="cri-o://a398787b0a26fe19af2503c25bc374336ec360896be427712cce55633539dcdc" gracePeriod=2 Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.414513 4828 generic.go:334] "Generic (PLEG): container finished" podID="f55f684a-dad3-467c-a32d-544c0c2439b4" containerID="a398787b0a26fe19af2503c25bc374336ec360896be427712cce55633539dcdc" exitCode=0 Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.414685 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5pxx" event={"ID":"f55f684a-dad3-467c-a32d-544c0c2439b4","Type":"ContainerDied","Data":"a398787b0a26fe19af2503c25bc374336ec360896be427712cce55633539dcdc"} Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.415118 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p5pxx" event={"ID":"f55f684a-dad3-467c-a32d-544c0c2439b4","Type":"ContainerDied","Data":"8e779d2dfcf082d32a459052dcd5aa1168b46ab9b05c2d7ab44f7efadee7b1a3"} Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.415133 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e779d2dfcf082d32a459052dcd5aa1168b46ab9b05c2d7ab44f7efadee7b1a3" Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.417053 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4" event={"ID":"96df2436-0a55-4b21-900b-dfedbafa290d","Type":"ContainerStarted","Data":"1a2222cb43451156f341325d6d2c0e70762fcda32c02262587ef6a01bfd8561b"} Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.484485 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hbrt4" podStartSLOduration=1.703912699 podStartE2EDuration="5.484459514s" podCreationTimestamp="2025-12-05 19:16:47 +0000 UTC" firstStartedPulling="2025-12-05 19:16:48.45746015 +0000 UTC m=+786.352682456" lastFinishedPulling="2025-12-05 19:16:52.238006955 +0000 UTC m=+790.133229271" observedRunningTime="2025-12-05 19:16:52.47911632 +0000 UTC m=+790.374338656" watchObservedRunningTime="2025-12-05 19:16:52.484459514 +0000 UTC m=+790.379681830" Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.586218 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p5pxx" Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.607554 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fds2\" (UniqueName: \"kubernetes.io/projected/f55f684a-dad3-467c-a32d-544c0c2439b4-kube-api-access-5fds2\") pod \"f55f684a-dad3-467c-a32d-544c0c2439b4\" (UID: \"f55f684a-dad3-467c-a32d-544c0c2439b4\") " Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.607628 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55f684a-dad3-467c-a32d-544c0c2439b4-utilities\") pod \"f55f684a-dad3-467c-a32d-544c0c2439b4\" (UID: \"f55f684a-dad3-467c-a32d-544c0c2439b4\") " Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.607690 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55f684a-dad3-467c-a32d-544c0c2439b4-catalog-content\") pod \"f55f684a-dad3-467c-a32d-544c0c2439b4\" (UID: \"f55f684a-dad3-467c-a32d-544c0c2439b4\") " Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.608983 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f55f684a-dad3-467c-a32d-544c0c2439b4-utilities" (OuterVolumeSpecName: "utilities") pod "f55f684a-dad3-467c-a32d-544c0c2439b4" (UID: "f55f684a-dad3-467c-a32d-544c0c2439b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.615126 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55f684a-dad3-467c-a32d-544c0c2439b4-kube-api-access-5fds2" (OuterVolumeSpecName: "kube-api-access-5fds2") pod "f55f684a-dad3-467c-a32d-544c0c2439b4" (UID: "f55f684a-dad3-467c-a32d-544c0c2439b4"). InnerVolumeSpecName "kube-api-access-5fds2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.706368 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f55f684a-dad3-467c-a32d-544c0c2439b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f55f684a-dad3-467c-a32d-544c0c2439b4" (UID: "f55f684a-dad3-467c-a32d-544c0c2439b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.708900 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55f684a-dad3-467c-a32d-544c0c2439b4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.708930 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fds2\" (UniqueName: \"kubernetes.io/projected/f55f684a-dad3-467c-a32d-544c0c2439b4-kube-api-access-5fds2\") on node \"crc\" DevicePath \"\"" Dec 05 19:16:52 crc kubenswrapper[4828]: I1205 19:16:52.708945 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55f684a-dad3-467c-a32d-544c0c2439b4-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:16:53 crc kubenswrapper[4828]: I1205 19:16:53.425193 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jzb4h" event={"ID":"5e07f179-6cb8-4771-894c-7ad6c2ee6b10","Type":"ContainerStarted","Data":"de98ef592652b58a8f1a75e74ab64c132baa8d40509729cd70ce865c7e1e9b81"} Dec 05 19:16:53 crc kubenswrapper[4828]: I1205 19:16:53.426491 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8" event={"ID":"c535354b-ac85-4a30-9f7d-1547f2db8fbc","Type":"ContainerStarted","Data":"c0dbab6333df86142cbb039e4cf38be1a6992a9c1ba26ec6e6b5d7fa17522391"} Dec 05 19:16:53 crc kubenswrapper[4828]: I1205 19:16:53.426961 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8" Dec 05 19:16:53 crc kubenswrapper[4828]: I1205 19:16:53.431242 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lmln5" event={"ID":"7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8","Type":"ContainerStarted","Data":"b29c58c75b6b5589a23b0209f52dbb02fee7c1e8971be1bb40d24bef31e4fddf"} Dec 05 19:16:53 crc kubenswrapper[4828]: I1205 19:16:53.431306 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p5pxx" Dec 05 19:16:53 crc kubenswrapper[4828]: I1205 19:16:53.450911 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8" podStartSLOduration=2.55590375 podStartE2EDuration="6.450897239s" podCreationTimestamp="2025-12-05 19:16:47 +0000 UTC" firstStartedPulling="2025-12-05 19:16:48.412441437 +0000 UTC m=+786.307663743" lastFinishedPulling="2025-12-05 19:16:52.307434926 +0000 UTC m=+790.202657232" observedRunningTime="2025-12-05 19:16:53.450692074 +0000 UTC m=+791.345914420" watchObservedRunningTime="2025-12-05 19:16:53.450897239 +0000 UTC m=+791.346119545" Dec 05 19:16:53 crc kubenswrapper[4828]: I1205 19:16:53.476994 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lmln5" podStartSLOduration=2.302473215 podStartE2EDuration="6.476971242s" podCreationTimestamp="2025-12-05 19:16:47 +0000 UTC" firstStartedPulling="2025-12-05 19:16:48.146106273 +0000 UTC m=+786.041328579" lastFinishedPulling="2025-12-05 19:16:52.3206043 +0000 UTC m=+790.215826606" observedRunningTime="2025-12-05 19:16:53.474522576 +0000 UTC m=+791.369744892" watchObservedRunningTime="2025-12-05 19:16:53.476971242 +0000 UTC m=+791.372193558" Dec 05 19:16:53 crc kubenswrapper[4828]: I1205 19:16:53.493516 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p5pxx"] Dec 05 19:16:53 crc kubenswrapper[4828]: I1205 19:16:53.499115 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p5pxx"] Dec 05 19:16:54 crc kubenswrapper[4828]: I1205 19:16:54.438809 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lmln5" Dec 05 19:16:54 crc kubenswrapper[4828]: I1205 19:16:54.454369 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55f684a-dad3-467c-a32d-544c0c2439b4" path="/var/lib/kubelet/pods/f55f684a-dad3-467c-a32d-544c0c2439b4/volumes" Dec 05 19:16:55 crc kubenswrapper[4828]: I1205 19:16:55.446392 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jzb4h" event={"ID":"5e07f179-6cb8-4771-894c-7ad6c2ee6b10","Type":"ContainerStarted","Data":"647d902aa6f9c8e1a0b069fc72884ebf3425680ca388e6fbc8db8ad2805ee0fb"} Dec 05 19:16:55 crc kubenswrapper[4828]: I1205 19:16:55.470875 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jzb4h" podStartSLOduration=2.508191266 podStartE2EDuration="8.470816895s" podCreationTimestamp="2025-12-05 19:16:47 +0000 UTC" firstStartedPulling="2025-12-05 19:16:48.373539759 +0000 UTC m=+786.268762065" lastFinishedPulling="2025-12-05 19:16:54.336165388 +0000 UTC m=+792.231387694" observedRunningTime="2025-12-05 19:16:55.466012866 +0000 UTC m=+793.361235162" watchObservedRunningTime="2025-12-05 19:16:55.470816895 +0000 UTC m=+793.366039241" Dec 05 19:16:58 crc kubenswrapper[4828]: I1205 19:16:58.150305 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lmln5" Dec 05 19:16:58 crc kubenswrapper[4828]: I1205 19:16:58.412536 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:58 crc kubenswrapper[4828]: I1205 19:16:58.412976 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:58 crc kubenswrapper[4828]: I1205 19:16:58.416969 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:58 crc kubenswrapper[4828]: I1205 19:16:58.473203 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-655b6b84f6-jhqjk" Dec 05 19:16:58 crc kubenswrapper[4828]: I1205 19:16:58.532055 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-q9sfv"] Dec 05 19:17:05 crc kubenswrapper[4828]: I1205 19:17:05.259372 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:17:05 crc kubenswrapper[4828]: I1205 19:17:05.259925 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:17:05 crc kubenswrapper[4828]: I1205 19:17:05.259983 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:17:05 crc kubenswrapper[4828]: I1205 19:17:05.260647 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e314ace055f344d073229b86b5faa8f9693ed01502a72c37b8b7db2eef860a3"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 19:17:05 crc kubenswrapper[4828]: I1205 19:17:05.260704 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://6e314ace055f344d073229b86b5faa8f9693ed01502a72c37b8b7db2eef860a3" gracePeriod=600 Dec 05 19:17:05 crc kubenswrapper[4828]: I1205 19:17:05.521490 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="6e314ace055f344d073229b86b5faa8f9693ed01502a72c37b8b7db2eef860a3" exitCode=0 Dec 05 19:17:05 crc kubenswrapper[4828]: I1205 19:17:05.521567 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"6e314ace055f344d073229b86b5faa8f9693ed01502a72c37b8b7db2eef860a3"} Dec 05 19:17:05 crc kubenswrapper[4828]: I1205 19:17:05.521627 4828 scope.go:117] "RemoveContainer" containerID="8bb75c4c0ebf5117e84bd2908811ecc4d1acf37d442ad533ca9f795d77cc15ba" Dec 05 19:17:06 crc kubenswrapper[4828]: I1205 19:17:06.528745 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"ba6c96d79cafa37f2c2c4a1d891acafd85624229c151c0bd90de50b84f8cad3b"} Dec 05 19:17:08 crc kubenswrapper[4828]: I1205 19:17:08.112040 4828 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-j5ps8" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.624514 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz"] Dec 05 19:17:22 crc kubenswrapper[4828]: E1205 19:17:22.626758 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55f684a-dad3-467c-a32d-544c0c2439b4" containerName="registry-server" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.626911 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55f684a-dad3-467c-a32d-544c0c2439b4" containerName="registry-server" Dec 05 19:17:22 crc kubenswrapper[4828]: E1205 19:17:22.627054 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55f684a-dad3-467c-a32d-544c0c2439b4" containerName="extract-content" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.627159 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55f684a-dad3-467c-a32d-544c0c2439b4" containerName="extract-content" Dec 05 19:17:22 crc kubenswrapper[4828]: E1205 19:17:22.627271 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55f684a-dad3-467c-a32d-544c0c2439b4" containerName="extract-utilities" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.627372 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55f684a-dad3-467c-a32d-544c0c2439b4" containerName="extract-utilities" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.627679 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55f684a-dad3-467c-a32d-544c0c2439b4" containerName="registry-server" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.629051 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.631194 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.632701 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz"] Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.829641 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz\" (UID: \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.829688 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz\" (UID: \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.829920 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqqz2\" (UniqueName: \"kubernetes.io/projected/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-kube-api-access-qqqz2\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz\" (UID: \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.931689 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqqz2\" (UniqueName: \"kubernetes.io/projected/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-kube-api-access-qqqz2\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz\" (UID: \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.931774 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz\" (UID: \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.931818 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz\" (UID: \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.932522 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz\" (UID: \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.932676 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz\" (UID: \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.961270 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqqz2\" (UniqueName: \"kubernetes.io/projected/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-kube-api-access-qqqz2\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz\" (UID: \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:22 crc kubenswrapper[4828]: I1205 19:17:22.994511 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:23 crc kubenswrapper[4828]: I1205 19:17:23.428502 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz"] Dec 05 19:17:23 crc kubenswrapper[4828]: W1205 19:17:23.436715 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80fa4fac_e710_4e46_8bd8_c8b4cb35b9f0.slice/crio-ec426cc948974f04cbafdb7dd274d7590975e710a2069c3a9ad1a0f277b27a63 WatchSource:0}: Error finding container ec426cc948974f04cbafdb7dd274d7590975e710a2069c3a9ad1a0f277b27a63: Status 404 returned error can't find the container with id ec426cc948974f04cbafdb7dd274d7590975e710a2069c3a9ad1a0f277b27a63 Dec 05 19:17:23 crc kubenswrapper[4828]: I1205 19:17:23.581107 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-q9sfv" podUID="4f8576a7-5291-4b1f-a06c-35395fa9c9dd" containerName="console" containerID="cri-o://eab0d133a19f7ed97abb6f5f5241d7de7a6b937642bf5c6eb02d6574f6b0ab84" gracePeriod=15 Dec 05 19:17:23 crc kubenswrapper[4828]: I1205 19:17:23.644517 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" event={"ID":"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0","Type":"ContainerStarted","Data":"ec426cc948974f04cbafdb7dd274d7590975e710a2069c3a9ad1a0f277b27a63"} Dec 05 19:17:24 crc kubenswrapper[4828]: I1205 19:17:24.652390 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-q9sfv_4f8576a7-5291-4b1f-a06c-35395fa9c9dd/console/0.log" Dec 05 19:17:24 crc kubenswrapper[4828]: I1205 19:17:24.652455 4828 generic.go:334] "Generic (PLEG): container finished" podID="4f8576a7-5291-4b1f-a06c-35395fa9c9dd" containerID="eab0d133a19f7ed97abb6f5f5241d7de7a6b937642bf5c6eb02d6574f6b0ab84" exitCode=2 Dec 05 19:17:24 crc kubenswrapper[4828]: I1205 19:17:24.652526 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q9sfv" 
event={"ID":"4f8576a7-5291-4b1f-a06c-35395fa9c9dd","Type":"ContainerDied","Data":"eab0d133a19f7ed97abb6f5f5241d7de7a6b937642bf5c6eb02d6574f6b0ab84"} Dec 05 19:17:24 crc kubenswrapper[4828]: I1205 19:17:24.653782 4828 generic.go:334] "Generic (PLEG): container finished" podID="80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" containerID="e2bb077198aec544214713441b9893d7eadea81e535418c904352ba2faf483a2" exitCode=0 Dec 05 19:17:24 crc kubenswrapper[4828]: I1205 19:17:24.653837 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" event={"ID":"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0","Type":"ContainerDied","Data":"e2bb077198aec544214713441b9893d7eadea81e535418c904352ba2faf483a2"} Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.128206 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-q9sfv_4f8576a7-5291-4b1f-a06c-35395fa9c9dd/console/0.log" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.128634 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.260450 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-oauth-config\") pod \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.260553 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-serving-cert\") pod \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.261600 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-service-ca\") pod \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.261638 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-oauth-serving-cert\") pod \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.261658 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-config\") pod \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.261677 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-trusted-ca-bundle\") pod \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.261697 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99fg2\" (UniqueName: 
\"kubernetes.io/projected/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-kube-api-access-99fg2\") pod \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\" (UID: \"4f8576a7-5291-4b1f-a06c-35395fa9c9dd\") " Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.262444 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-service-ca" (OuterVolumeSpecName: "service-ca") pod "4f8576a7-5291-4b1f-a06c-35395fa9c9dd" (UID: "4f8576a7-5291-4b1f-a06c-35395fa9c9dd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.262487 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-config" (OuterVolumeSpecName: "console-config") pod "4f8576a7-5291-4b1f-a06c-35395fa9c9dd" (UID: "4f8576a7-5291-4b1f-a06c-35395fa9c9dd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.262764 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4f8576a7-5291-4b1f-a06c-35395fa9c9dd" (UID: "4f8576a7-5291-4b1f-a06c-35395fa9c9dd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.262924 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4f8576a7-5291-4b1f-a06c-35395fa9c9dd" (UID: "4f8576a7-5291-4b1f-a06c-35395fa9c9dd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.266416 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-kube-api-access-99fg2" (OuterVolumeSpecName: "kube-api-access-99fg2") pod "4f8576a7-5291-4b1f-a06c-35395fa9c9dd" (UID: "4f8576a7-5291-4b1f-a06c-35395fa9c9dd"). InnerVolumeSpecName "kube-api-access-99fg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.266461 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4f8576a7-5291-4b1f-a06c-35395fa9c9dd" (UID: "4f8576a7-5291-4b1f-a06c-35395fa9c9dd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.266860 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4f8576a7-5291-4b1f-a06c-35395fa9c9dd" (UID: "4f8576a7-5291-4b1f-a06c-35395fa9c9dd"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.362592 4828 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.362641 4828 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.362653 4828 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.362666 4828 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.362678 4828 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.362687 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99fg2\" (UniqueName: \"kubernetes.io/projected/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-kube-api-access-99fg2\") on node \"crc\" DevicePath \"\"" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.362697 4828 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f8576a7-5291-4b1f-a06c-35395fa9c9dd-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.661190 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-q9sfv_4f8576a7-5291-4b1f-a06c-35395fa9c9dd/console/0.log" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.661253 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q9sfv" event={"ID":"4f8576a7-5291-4b1f-a06c-35395fa9c9dd","Type":"ContainerDied","Data":"e09842cab43510e669c6b30aa68d7430956fbe0543b77459cc8b5bac63a4134a"} Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.661293 4828 scope.go:117] "RemoveContainer" containerID="eab0d133a19f7ed97abb6f5f5241d7de7a6b937642bf5c6eb02d6574f6b0ab84" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.661339 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-q9sfv" Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.698541 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-q9sfv"] Dec 05 19:17:25 crc kubenswrapper[4828]: I1205 19:17:25.703299 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-q9sfv"] Dec 05 19:17:26 crc kubenswrapper[4828]: I1205 19:17:26.457346 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f8576a7-5291-4b1f-a06c-35395fa9c9dd" path="/var/lib/kubelet/pods/4f8576a7-5291-4b1f-a06c-35395fa9c9dd/volumes" Dec 05 19:17:26 crc kubenswrapper[4828]: I1205 19:17:26.685858 4828 generic.go:334] "Generic (PLEG): container finished" podID="80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" containerID="a23786ed62e2d4b7d6fb41481f69aae07dd2035bb8d4e6c661ecc3388def468a" exitCode=0 Dec 05 19:17:26 crc kubenswrapper[4828]: I1205 19:17:26.685909 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" event={"ID":"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0","Type":"ContainerDied","Data":"a23786ed62e2d4b7d6fb41481f69aae07dd2035bb8d4e6c661ecc3388def468a"} Dec 05 19:17:27 crc kubenswrapper[4828]: I1205 19:17:27.699126 4828 generic.go:334] "Generic (PLEG): container finished" podID="80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" containerID="e848f1cd8778359a3f09a15be9bc872bdfacac91833df5d72fb9cd2f21083f2d" exitCode=0 Dec 05 19:17:27 crc kubenswrapper[4828]: I1205 19:17:27.699194 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" event={"ID":"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0","Type":"ContainerDied","Data":"e848f1cd8778359a3f09a15be9bc872bdfacac91833df5d72fb9cd2f21083f2d"} Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.054523 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.213527 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqqz2\" (UniqueName: \"kubernetes.io/projected/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-kube-api-access-qqqz2\") pod \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\" (UID: \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\") " Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.213578 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-bundle\") pod \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\" (UID: \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\") " Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.213624 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-util\") pod \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\" (UID: \"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0\") " Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.214926 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-bundle" (OuterVolumeSpecName: "bundle") pod "80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" (UID: "80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.222468 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-kube-api-access-qqqz2" (OuterVolumeSpecName: "kube-api-access-qqqz2") pod "80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" (UID: "80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0"). InnerVolumeSpecName "kube-api-access-qqqz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.232019 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-util" (OuterVolumeSpecName: "util") pod "80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" (UID: "80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.314682 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqqz2\" (UniqueName: \"kubernetes.io/projected/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-kube-api-access-qqqz2\") on node \"crc\" DevicePath \"\"" Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.314729 4828 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.314743 4828 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0-util\") on node \"crc\" DevicePath \"\"" Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.716346 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" event={"ID":"80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0","Type":"ContainerDied","Data":"ec426cc948974f04cbafdb7dd274d7590975e710a2069c3a9ad1a0f277b27a63"} Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.716400 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec426cc948974f04cbafdb7dd274d7590975e710a2069c3a9ad1a0f277b27a63" Dec 05 19:17:29 crc kubenswrapper[4828]: I1205 19:17:29.716436 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.745584 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55"] Dec 05 19:17:40 crc kubenswrapper[4828]: E1205 19:17:40.746355 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" containerName="pull" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.746371 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" containerName="pull" Dec 05 19:17:40 crc kubenswrapper[4828]: E1205 19:17:40.746393 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f8576a7-5291-4b1f-a06c-35395fa9c9dd" containerName="console" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.746400 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f8576a7-5291-4b1f-a06c-35395fa9c9dd" containerName="console" Dec 05 19:17:40 crc kubenswrapper[4828]: E1205 19:17:40.746411 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" containerName="util" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.746419 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" containerName="util" Dec 05 19:17:40 crc kubenswrapper[4828]: E1205 19:17:40.746428 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" containerName="extract" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.746435 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" containerName="extract" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.746562 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f8576a7-5291-4b1f-a06c-35395fa9c9dd" containerName="console" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.746576 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0" containerName="extract" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.747062 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.751577 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.752150 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.752887 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.753054 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.753100 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gdcn9" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.763687 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55"] Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.775475 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xvbj\" (UniqueName: \"kubernetes.io/projected/b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2-kube-api-access-6xvbj\") pod \"metallb-operator-controller-manager-cccfd6bcb-v7d55\" (UID: \"b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2\") " pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.775545 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2-webhook-cert\") pod \"metallb-operator-controller-manager-cccfd6bcb-v7d55\" (UID: \"b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2\") " pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.775614 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2-apiservice-cert\") pod \"metallb-operator-controller-manager-cccfd6bcb-v7d55\" (UID: \"b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2\") " pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.876765 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2-webhook-cert\") pod \"metallb-operator-controller-manager-cccfd6bcb-v7d55\" (UID: \"b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2\") " pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.876939 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2-apiservice-cert\") pod \"metallb-operator-controller-manager-cccfd6bcb-v7d55\" (UID: \"b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2\") " pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.877007 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6xvbj\" (UniqueName: \"kubernetes.io/projected/b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2-kube-api-access-6xvbj\") pod \"metallb-operator-controller-manager-cccfd6bcb-v7d55\" (UID: \"b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2\") " pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.883536 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2-apiservice-cert\") pod \"metallb-operator-controller-manager-cccfd6bcb-v7d55\" (UID: \"b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2\") " pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.895912 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2-webhook-cert\") pod \"metallb-operator-controller-manager-cccfd6bcb-v7d55\" (UID: \"b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2\") " pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:17:40 crc kubenswrapper[4828]: I1205 19:17:40.897188 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xvbj\" (UniqueName: \"kubernetes.io/projected/b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2-kube-api-access-6xvbj\") pod \"metallb-operator-controller-manager-cccfd6bcb-v7d55\" (UID: \"b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2\") " pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.080611 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.165275 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-8f476869-jcl7n"] Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.165993 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.171236 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.171265 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.171279 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-77z7f" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.180097 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88807e96-2cf3-4ab4-863d-48538fac8bc8-apiservice-cert\") pod \"metallb-operator-webhook-server-8f476869-jcl7n\" (UID: \"88807e96-2cf3-4ab4-863d-48538fac8bc8\") " pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.180305 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88807e96-2cf3-4ab4-863d-48538fac8bc8-webhook-cert\") pod \"metallb-operator-webhook-server-8f476869-jcl7n\" (UID: \"88807e96-2cf3-4ab4-863d-48538fac8bc8\") " pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.180357 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwmn6\" (UniqueName: \"kubernetes.io/projected/88807e96-2cf3-4ab4-863d-48538fac8bc8-kube-api-access-zwmn6\") pod \"metallb-operator-webhook-server-8f476869-jcl7n\" (UID: \"88807e96-2cf3-4ab4-863d-48538fac8bc8\") " pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.190464 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8f476869-jcl7n"] Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.288583 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88807e96-2cf3-4ab4-863d-48538fac8bc8-apiservice-cert\") pod \"metallb-operator-webhook-server-8f476869-jcl7n\" (UID: \"88807e96-2cf3-4ab4-863d-48538fac8bc8\") " pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.288680 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88807e96-2cf3-4ab4-863d-48538fac8bc8-webhook-cert\") pod \"metallb-operator-webhook-server-8f476869-jcl7n\" (UID: \"88807e96-2cf3-4ab4-863d-48538fac8bc8\") " pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.288705 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwmn6\" (UniqueName: \"kubernetes.io/projected/88807e96-2cf3-4ab4-863d-48538fac8bc8-kube-api-access-zwmn6\") pod \"metallb-operator-webhook-server-8f476869-jcl7n\" (UID: \"88807e96-2cf3-4ab4-863d-48538fac8bc8\") " pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.295526 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88807e96-2cf3-4ab4-863d-48538fac8bc8-webhook-cert\") pod \"metallb-operator-webhook-server-8f476869-jcl7n\" (UID: \"88807e96-2cf3-4ab4-863d-48538fac8bc8\") " pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.297225 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88807e96-2cf3-4ab4-863d-48538fac8bc8-apiservice-cert\") pod \"metallb-operator-webhook-server-8f476869-jcl7n\" (UID: \"88807e96-2cf3-4ab4-863d-48538fac8bc8\") " pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.315663 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwmn6\" (UniqueName: \"kubernetes.io/projected/88807e96-2cf3-4ab4-863d-48538fac8bc8-kube-api-access-zwmn6\") pod \"metallb-operator-webhook-server-8f476869-jcl7n\" (UID: \"88807e96-2cf3-4ab4-863d-48538fac8bc8\") " pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.479876 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.642625 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55"] Dec 05 19:17:41 crc kubenswrapper[4828]: W1205 19:17:41.649089 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8cd7c76_c03d_4f9a_9e3e_d982c39d92c2.slice/crio-a1871541078d172bdf3b1d5252546d98800caef7d50f7dec3dfe10f6b3e5bc8b WatchSource:0}: Error finding container a1871541078d172bdf3b1d5252546d98800caef7d50f7dec3dfe10f6b3e5bc8b: Status 404 returned error can't find the container with id a1871541078d172bdf3b1d5252546d98800caef7d50f7dec3dfe10f6b3e5bc8b Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.739193 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8f476869-jcl7n"] Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.787449 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" event={"ID":"b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2","Type":"ContainerStarted","Data":"a1871541078d172bdf3b1d5252546d98800caef7d50f7dec3dfe10f6b3e5bc8b"} Dec 05 19:17:41 crc kubenswrapper[4828]: I1205 19:17:41.788430 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" event={"ID":"88807e96-2cf3-4ab4-863d-48538fac8bc8","Type":"ContainerStarted","Data":"60b71ceea430980b3076231fb2fe959d94a726d1f09692e0eb1591226fa2e392"} Dec 05 19:17:47 crc kubenswrapper[4828]: I1205 19:17:47.828772 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" event={"ID":"88807e96-2cf3-4ab4-863d-48538fac8bc8","Type":"ContainerStarted","Data":"065b748d186a21088ad0afde0be44aeaba85040b24a3219f79a82aeb9c56bd7c"} Dec 05 19:17:47 crc kubenswrapper[4828]: I1205 19:17:47.829448 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 
19:17:47 crc kubenswrapper[4828]: I1205 19:17:47.830809 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" event={"ID":"b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2","Type":"ContainerStarted","Data":"ce864b7e2c156ad54c594426ecafdaa076482d9391d35380515ec6860da3c621"} Dec 05 19:17:47 crc kubenswrapper[4828]: I1205 19:17:47.830986 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:17:47 crc kubenswrapper[4828]: I1205 19:17:47.853286 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" podStartSLOduration=1.5030297479999999 podStartE2EDuration="6.85326786s" podCreationTimestamp="2025-12-05 19:17:41 +0000 UTC" firstStartedPulling="2025-12-05 19:17:41.748838481 +0000 UTC m=+839.644060787" lastFinishedPulling="2025-12-05 19:17:47.099076593 +0000 UTC m=+844.994298899" observedRunningTime="2025-12-05 19:17:47.850115245 +0000 UTC m=+845.745337561" watchObservedRunningTime="2025-12-05 19:17:47.85326786 +0000 UTC m=+845.748490166" Dec 05 19:17:47 crc kubenswrapper[4828]: I1205 19:17:47.876223 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" podStartSLOduration=2.441723936 podStartE2EDuration="7.876203268s" podCreationTimestamp="2025-12-05 19:17:40 +0000 UTC" firstStartedPulling="2025-12-05 19:17:41.651091417 +0000 UTC m=+839.546313713" lastFinishedPulling="2025-12-05 19:17:47.085570739 +0000 UTC m=+844.980793045" observedRunningTime="2025-12-05 19:17:47.871138821 +0000 UTC m=+845.766361127" watchObservedRunningTime="2025-12-05 19:17:47.876203268 +0000 UTC m=+845.771425574" Dec 05 19:18:01 crc kubenswrapper[4828]: I1205 19:18:01.486221 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-8f476869-jcl7n" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.085292 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-cccfd6bcb-v7d55" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.857646 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-p2wtq"] Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.860317 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.863667 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.863810 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.865009 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-kf55m" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.875418 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2"] Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.876446 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.878357 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.884146 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2"] Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.928549 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llh4h\" (UniqueName: \"kubernetes.io/projected/c24da36a-69fc-4337-87e0-4a1cc34090ff-kube-api-access-llh4h\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.928616 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67e4c769-0905-4c8f-8fc0-2488346fe188-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-w8vp2\" (UID: \"67e4c769-0905-4c8f-8fc0-2488346fe188\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.928653 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nz7q\" (UniqueName: \"kubernetes.io/projected/67e4c769-0905-4c8f-8fc0-2488346fe188-kube-api-access-6nz7q\") pod \"frr-k8s-webhook-server-7fcb986d4-w8vp2\" (UID: \"67e4c769-0905-4c8f-8fc0-2488346fe188\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.928705 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c24da36a-69fc-4337-87e0-4a1cc34090ff-metrics\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.928789 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c24da36a-69fc-4337-87e0-4a1cc34090ff-metrics-certs\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.928858 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c24da36a-69fc-4337-87e0-4a1cc34090ff-reloader\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.928942 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c24da36a-69fc-4337-87e0-4a1cc34090ff-frr-startup\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.928978 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c24da36a-69fc-4337-87e0-4a1cc34090ff-frr-sockets\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " 
pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.929001 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c24da36a-69fc-4337-87e0-4a1cc34090ff-frr-conf\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.968666 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gqq4l"] Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.969712 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gqq4l" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.972484 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.972484 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.972523 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rzw6n" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.974295 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.992961 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-t5wnc"] Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.994040 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:21 crc kubenswrapper[4828]: I1205 19:18:21.997178 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.012359 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-t5wnc"] Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.030077 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35cb1f63-dbf8-4451-adff-4b35840e5498-metrics-certs\") pod \"controller-f8648f98b-t5wnc\" (UID: \"35cb1f63-dbf8-4451-adff-4b35840e5498\") " pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.030311 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8a121072-6f44-4a42-b9b1-a54d8d04fea4-metallb-excludel2\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.030395 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llh4h\" (UniqueName: \"kubernetes.io/projected/c24da36a-69fc-4337-87e0-4a1cc34090ff-kube-api-access-llh4h\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.030502 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67e4c769-0905-4c8f-8fc0-2488346fe188-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-w8vp2\" 
(UID: \"67e4c769-0905-4c8f-8fc0-2488346fe188\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.030579 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nz7q\" (UniqueName: \"kubernetes.io/projected/67e4c769-0905-4c8f-8fc0-2488346fe188-kube-api-access-6nz7q\") pod \"frr-k8s-webhook-server-7fcb986d4-w8vp2\" (UID: \"67e4c769-0905-4c8f-8fc0-2488346fe188\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" Dec 05 19:18:22 crc kubenswrapper[4828]: E1205 19:18:22.030651 4828 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 05 19:18:22 crc kubenswrapper[4828]: E1205 19:18:22.030713 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67e4c769-0905-4c8f-8fc0-2488346fe188-cert podName:67e4c769-0905-4c8f-8fc0-2488346fe188 nodeName:}" failed. No retries permitted until 2025-12-05 19:18:22.530697665 +0000 UTC m=+880.425919971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/67e4c769-0905-4c8f-8fc0-2488346fe188-cert") pod "frr-k8s-webhook-server-7fcb986d4-w8vp2" (UID: "67e4c769-0905-4c8f-8fc0-2488346fe188") : secret "frr-k8s-webhook-server-cert" not found Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.030662 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c24da36a-69fc-4337-87e0-4a1cc34090ff-metrics\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.030836 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7xdv\" (UniqueName: \"kubernetes.io/projected/8a121072-6f44-4a42-b9b1-a54d8d04fea4-kube-api-access-n7xdv\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.030892 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrcbf\" (UniqueName: \"kubernetes.io/projected/35cb1f63-dbf8-4451-adff-4b35840e5498-kube-api-access-nrcbf\") pod \"controller-f8648f98b-t5wnc\" (UID: \"35cb1f63-dbf8-4451-adff-4b35840e5498\") " pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.030925 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c24da36a-69fc-4337-87e0-4a1cc34090ff-metrics-certs\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.030948 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c24da36a-69fc-4337-87e0-4a1cc34090ff-reloader\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.030971 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35cb1f63-dbf8-4451-adff-4b35840e5498-cert\") pod \"controller-f8648f98b-t5wnc\" (UID: 
\"35cb1f63-dbf8-4451-adff-4b35840e5498\") " pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.031005 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c24da36a-69fc-4337-87e0-4a1cc34090ff-frr-startup\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.031032 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c24da36a-69fc-4337-87e0-4a1cc34090ff-frr-sockets\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.031079 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c24da36a-69fc-4337-87e0-4a1cc34090ff-frr-conf\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.031105 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8a121072-6f44-4a42-b9b1-a54d8d04fea4-memberlist\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.031122 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a121072-6f44-4a42-b9b1-a54d8d04fea4-metrics-certs\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.031448 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c24da36a-69fc-4337-87e0-4a1cc34090ff-reloader\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.031964 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c24da36a-69fc-4337-87e0-4a1cc34090ff-frr-sockets\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.032047 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c24da36a-69fc-4337-87e0-4a1cc34090ff-metrics\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.032050 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c24da36a-69fc-4337-87e0-4a1cc34090ff-frr-conf\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.032125 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c24da36a-69fc-4337-87e0-4a1cc34090ff-frr-startup\") pod \"frr-k8s-p2wtq\" 
(UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.036687 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c24da36a-69fc-4337-87e0-4a1cc34090ff-metrics-certs\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.047376 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llh4h\" (UniqueName: \"kubernetes.io/projected/c24da36a-69fc-4337-87e0-4a1cc34090ff-kube-api-access-llh4h\") pod \"frr-k8s-p2wtq\" (UID: \"c24da36a-69fc-4337-87e0-4a1cc34090ff\") " pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.049304 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nz7q\" (UniqueName: \"kubernetes.io/projected/67e4c769-0905-4c8f-8fc0-2488346fe188-kube-api-access-6nz7q\") pod \"frr-k8s-webhook-server-7fcb986d4-w8vp2\" (UID: \"67e4c769-0905-4c8f-8fc0-2488346fe188\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.131615 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35cb1f63-dbf8-4451-adff-4b35840e5498-metrics-certs\") pod \"controller-f8648f98b-t5wnc\" (UID: \"35cb1f63-dbf8-4451-adff-4b35840e5498\") " pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.131666 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8a121072-6f44-4a42-b9b1-a54d8d04fea4-metallb-excludel2\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.131733 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7xdv\" (UniqueName: \"kubernetes.io/projected/8a121072-6f44-4a42-b9b1-a54d8d04fea4-kube-api-access-n7xdv\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.131754 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrcbf\" (UniqueName: \"kubernetes.io/projected/35cb1f63-dbf8-4451-adff-4b35840e5498-kube-api-access-nrcbf\") pod \"controller-f8648f98b-t5wnc\" (UID: \"35cb1f63-dbf8-4451-adff-4b35840e5498\") " pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.131781 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35cb1f63-dbf8-4451-adff-4b35840e5498-cert\") pod \"controller-f8648f98b-t5wnc\" (UID: \"35cb1f63-dbf8-4451-adff-4b35840e5498\") " pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.131838 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8a121072-6f44-4a42-b9b1-a54d8d04fea4-memberlist\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.131868 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a121072-6f44-4a42-b9b1-a54d8d04fea4-metrics-certs\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:22 crc kubenswrapper[4828]: E1205 19:18:22.132373 4828 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 05 19:18:22 crc kubenswrapper[4828]: E1205 19:18:22.132476 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a121072-6f44-4a42-b9b1-a54d8d04fea4-memberlist podName:8a121072-6f44-4a42-b9b1-a54d8d04fea4 nodeName:}" failed. No retries permitted until 2025-12-05 19:18:22.632454497 +0000 UTC m=+880.527676903 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8a121072-6f44-4a42-b9b1-a54d8d04fea4-memberlist") pod "speaker-gqq4l" (UID: "8a121072-6f44-4a42-b9b1-a54d8d04fea4") : secret "metallb-memberlist" not found Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.132549 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8a121072-6f44-4a42-b9b1-a54d8d04fea4-metallb-excludel2\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.135102 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35cb1f63-dbf8-4451-adff-4b35840e5498-metrics-certs\") pod \"controller-f8648f98b-t5wnc\" (UID: \"35cb1f63-dbf8-4451-adff-4b35840e5498\") " pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.135455 4828 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.136662 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a121072-6f44-4a42-b9b1-a54d8d04fea4-metrics-certs\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.145798 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35cb1f63-dbf8-4451-adff-4b35840e5498-cert\") pod \"controller-f8648f98b-t5wnc\" (UID: \"35cb1f63-dbf8-4451-adff-4b35840e5498\") " pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.153969 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrcbf\" (UniqueName: \"kubernetes.io/projected/35cb1f63-dbf8-4451-adff-4b35840e5498-kube-api-access-nrcbf\") pod \"controller-f8648f98b-t5wnc\" (UID: \"35cb1f63-dbf8-4451-adff-4b35840e5498\") " pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.166496 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7xdv\" (UniqueName: \"kubernetes.io/projected/8a121072-6f44-4a42-b9b1-a54d8d04fea4-kube-api-access-n7xdv\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.176283 4828 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.306275 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.538009 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67e4c769-0905-4c8f-8fc0-2488346fe188-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-w8vp2\" (UID: \"67e4c769-0905-4c8f-8fc0-2488346fe188\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.547966 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67e4c769-0905-4c8f-8fc0-2488346fe188-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-w8vp2\" (UID: \"67e4c769-0905-4c8f-8fc0-2488346fe188\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.639654 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8a121072-6f44-4a42-b9b1-a54d8d04fea4-memberlist\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:22 crc kubenswrapper[4828]: E1205 19:18:22.640264 4828 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 05 19:18:22 crc kubenswrapper[4828]: E1205 19:18:22.640430 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a121072-6f44-4a42-b9b1-a54d8d04fea4-memberlist podName:8a121072-6f44-4a42-b9b1-a54d8d04fea4 nodeName:}" failed. No retries permitted until 2025-12-05 19:18:23.64039273 +0000 UTC m=+881.535615086 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8a121072-6f44-4a42-b9b1-a54d8d04fea4-memberlist") pod "speaker-gqq4l" (UID: "8a121072-6f44-4a42-b9b1-a54d8d04fea4") : secret "metallb-memberlist" not found Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.726626 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-t5wnc"] Dec 05 19:18:22 crc kubenswrapper[4828]: W1205 19:18:22.733901 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35cb1f63_dbf8_4451_adff_4b35840e5498.slice/crio-eb598f46cd0c053c966580eea71c98ed9d3473c95a65a89845f24342af599d60 WatchSource:0}: Error finding container eb598f46cd0c053c966580eea71c98ed9d3473c95a65a89845f24342af599d60: Status 404 returned error can't find the container with id eb598f46cd0c053c966580eea71c98ed9d3473c95a65a89845f24342af599d60 Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.788575 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" Dec 05 19:18:22 crc kubenswrapper[4828]: I1205 19:18:22.992687 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2"] Dec 05 19:18:23 crc kubenswrapper[4828]: I1205 19:18:23.052651 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" event={"ID":"67e4c769-0905-4c8f-8fc0-2488346fe188","Type":"ContainerStarted","Data":"3359c06cdd46ce69efe7946480b1b97429f3ce9d223d187caf95726bf2b9fefa"} Dec 05 19:18:23 crc kubenswrapper[4828]: I1205 19:18:23.054539 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-t5wnc" event={"ID":"35cb1f63-dbf8-4451-adff-4b35840e5498","Type":"ContainerStarted","Data":"99f106d4ceb1daf4da45ae5c17ce3f4a51d6af4f8e2b449d8b15c7c64a2aaa73"} Dec 05 19:18:23 crc kubenswrapper[4828]: I1205 19:18:23.054568 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-t5wnc" event={"ID":"35cb1f63-dbf8-4451-adff-4b35840e5498","Type":"ContainerStarted","Data":"f163e9c7795f5a01d420a503ffe4ce06851757814b758532650d970493f2ecbd"} Dec 05 19:18:23 crc kubenswrapper[4828]: I1205 19:18:23.054581 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-t5wnc" event={"ID":"35cb1f63-dbf8-4451-adff-4b35840e5498","Type":"ContainerStarted","Data":"eb598f46cd0c053c966580eea71c98ed9d3473c95a65a89845f24342af599d60"} Dec 05 19:18:23 crc kubenswrapper[4828]: I1205 19:18:23.054680 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:23 crc kubenswrapper[4828]: I1205 19:18:23.066406 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-p2wtq" event={"ID":"c24da36a-69fc-4337-87e0-4a1cc34090ff","Type":"ContainerStarted","Data":"5d9d720b369c40cc68f53a79e3334c03922a778e9cbd4fbae7ddeb238984284e"} Dec 05 19:18:23 crc kubenswrapper[4828]: I1205 19:18:23.084354 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-t5wnc" podStartSLOduration=2.08433133 podStartE2EDuration="2.08433133s" podCreationTimestamp="2025-12-05 19:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:18:23.080161187 +0000 UTC m=+880.975383493" watchObservedRunningTime="2025-12-05 19:18:23.08433133 +0000 UTC m=+880.979553636" Dec 05 19:18:23 crc kubenswrapper[4828]: I1205 19:18:23.650769 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8a121072-6f44-4a42-b9b1-a54d8d04fea4-memberlist\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:23 crc kubenswrapper[4828]: I1205 19:18:23.656449 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8a121072-6f44-4a42-b9b1-a54d8d04fea4-memberlist\") pod \"speaker-gqq4l\" (UID: \"8a121072-6f44-4a42-b9b1-a54d8d04fea4\") " pod="metallb-system/speaker-gqq4l" Dec 05 19:18:23 crc kubenswrapper[4828]: I1205 19:18:23.783802 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-gqq4l" Dec 05 19:18:23 crc kubenswrapper[4828]: W1205 19:18:23.831402 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a121072_6f44_4a42_b9b1_a54d8d04fea4.slice/crio-79a810de96949297971898c49e401cab3d074e59680e2ad042f293bc5d2e71e5 WatchSource:0}: Error finding container 79a810de96949297971898c49e401cab3d074e59680e2ad042f293bc5d2e71e5: Status 404 returned error can't find the container with id 79a810de96949297971898c49e401cab3d074e59680e2ad042f293bc5d2e71e5 Dec 05 19:18:24 crc kubenswrapper[4828]: I1205 19:18:24.083049 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gqq4l" event={"ID":"8a121072-6f44-4a42-b9b1-a54d8d04fea4","Type":"ContainerStarted","Data":"79a810de96949297971898c49e401cab3d074e59680e2ad042f293bc5d2e71e5"} Dec 05 19:18:25 crc kubenswrapper[4828]: I1205 19:18:25.095975 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gqq4l" event={"ID":"8a121072-6f44-4a42-b9b1-a54d8d04fea4","Type":"ContainerStarted","Data":"63ba507c71ea9d1d37159c43bed24be08373a1ae224d9d747eac52fe42b31e46"} Dec 05 19:18:25 crc kubenswrapper[4828]: I1205 19:18:25.096221 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gqq4l" event={"ID":"8a121072-6f44-4a42-b9b1-a54d8d04fea4","Type":"ContainerStarted","Data":"f964d741d9fd769d3f906ed43029dca0dee5a40413a09dea5fce41b4e667d805"} Dec 05 19:18:25 crc kubenswrapper[4828]: I1205 19:18:25.096249 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gqq4l" Dec 05 19:18:25 crc kubenswrapper[4828]: I1205 19:18:25.120620 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-gqq4l" podStartSLOduration=4.120601155 podStartE2EDuration="4.120601155s" podCreationTimestamp="2025-12-05 19:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:18:25.11522033 +0000 UTC m=+883.010442646" watchObservedRunningTime="2025-12-05 19:18:25.120601155 +0000 UTC m=+883.015823471" Dec 05 19:18:30 crc kubenswrapper[4828]: I1205 19:18:30.235392 4828 generic.go:334] "Generic (PLEG): container finished" podID="c24da36a-69fc-4337-87e0-4a1cc34090ff" containerID="f844422fe3adeccae9007060e144404fa0f838b1d832ade1ae884f30ec773df5" exitCode=0 Dec 05 19:18:30 crc kubenswrapper[4828]: I1205 19:18:30.235523 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-p2wtq" event={"ID":"c24da36a-69fc-4337-87e0-4a1cc34090ff","Type":"ContainerDied","Data":"f844422fe3adeccae9007060e144404fa0f838b1d832ade1ae884f30ec773df5"} Dec 05 19:18:30 crc kubenswrapper[4828]: I1205 19:18:30.240039 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" event={"ID":"67e4c769-0905-4c8f-8fc0-2488346fe188","Type":"ContainerStarted","Data":"d2a750cb3d1f269cf23f127cfcafb2227df7c14ca254a634e69aa0eb1f762b81"} Dec 05 19:18:30 crc kubenswrapper[4828]: I1205 19:18:30.240296 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" Dec 05 19:18:31 crc kubenswrapper[4828]: I1205 19:18:31.250218 4828 generic.go:334] "Generic (PLEG): container finished" podID="c24da36a-69fc-4337-87e0-4a1cc34090ff" 
containerID="208adfbbc0f95956276ea0104bb9f24e429794c2a249fde808ccbf050a6ee44a" exitCode=0 Dec 05 19:18:31 crc kubenswrapper[4828]: I1205 19:18:31.250666 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-p2wtq" event={"ID":"c24da36a-69fc-4337-87e0-4a1cc34090ff","Type":"ContainerDied","Data":"208adfbbc0f95956276ea0104bb9f24e429794c2a249fde808ccbf050a6ee44a"} Dec 05 19:18:31 crc kubenswrapper[4828]: I1205 19:18:31.288795 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" podStartSLOduration=3.442079676 podStartE2EDuration="10.288766671s" podCreationTimestamp="2025-12-05 19:18:21 +0000 UTC" firstStartedPulling="2025-12-05 19:18:22.996525584 +0000 UTC m=+880.891747890" lastFinishedPulling="2025-12-05 19:18:29.843212569 +0000 UTC m=+887.738434885" observedRunningTime="2025-12-05 19:18:30.328099861 +0000 UTC m=+888.223322167" watchObservedRunningTime="2025-12-05 19:18:31.288766671 +0000 UTC m=+889.183989017" Dec 05 19:18:32 crc kubenswrapper[4828]: I1205 19:18:32.259018 4828 generic.go:334] "Generic (PLEG): container finished" podID="c24da36a-69fc-4337-87e0-4a1cc34090ff" containerID="3db386e579daeb86eba5ee987facc157301399aa197080fc9b9cfe2265fc4994" exitCode=0 Dec 05 19:18:32 crc kubenswrapper[4828]: I1205 19:18:32.259090 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-p2wtq" event={"ID":"c24da36a-69fc-4337-87e0-4a1cc34090ff","Type":"ContainerDied","Data":"3db386e579daeb86eba5ee987facc157301399aa197080fc9b9cfe2265fc4994"} Dec 05 19:18:32 crc kubenswrapper[4828]: I1205 19:18:32.317148 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-t5wnc" Dec 05 19:18:33 crc kubenswrapper[4828]: I1205 19:18:33.275042 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-p2wtq" event={"ID":"c24da36a-69fc-4337-87e0-4a1cc34090ff","Type":"ContainerStarted","Data":"a4fd905c996dee13897457fc4cbdb958774656a9be6c53597a6e1ecb66c5fe4f"} Dec 05 19:18:33 crc kubenswrapper[4828]: I1205 19:18:33.275291 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-p2wtq" event={"ID":"c24da36a-69fc-4337-87e0-4a1cc34090ff","Type":"ContainerStarted","Data":"2da16a1a4f631b27b22a86cae577901b0707fe01312dc7669d89fc126f638181"} Dec 05 19:18:33 crc kubenswrapper[4828]: I1205 19:18:33.275302 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-p2wtq" event={"ID":"c24da36a-69fc-4337-87e0-4a1cc34090ff","Type":"ContainerStarted","Data":"7fa4270d489d399b454a1e14518fcf7a7c4ba9e1416b9ef0fbb6559b68d60916"} Dec 05 19:18:33 crc kubenswrapper[4828]: I1205 19:18:33.275310 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-p2wtq" event={"ID":"c24da36a-69fc-4337-87e0-4a1cc34090ff","Type":"ContainerStarted","Data":"c66690b92fef599704ca015cbebf8f815d12554966612bc94111a17ccf09f98d"} Dec 05 19:18:33 crc kubenswrapper[4828]: I1205 19:18:33.275318 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-p2wtq" event={"ID":"c24da36a-69fc-4337-87e0-4a1cc34090ff","Type":"ContainerStarted","Data":"1507276c80d834e07329caf598334bc46914fa0e922ed15545c00fccc8fd07b7"} Dec 05 19:18:34 crc kubenswrapper[4828]: I1205 19:18:34.287656 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-p2wtq" 
event={"ID":"c24da36a-69fc-4337-87e0-4a1cc34090ff","Type":"ContainerStarted","Data":"671e11ae03b11e840b615e3551e8d767388e6317845582d54c1aeced965b515e"} Dec 05 19:18:34 crc kubenswrapper[4828]: I1205 19:18:34.289161 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:34 crc kubenswrapper[4828]: I1205 19:18:34.316596 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-p2wtq" podStartSLOduration=5.828755553 podStartE2EDuration="13.316570258s" podCreationTimestamp="2025-12-05 19:18:21 +0000 UTC" firstStartedPulling="2025-12-05 19:18:22.321193962 +0000 UTC m=+880.216416268" lastFinishedPulling="2025-12-05 19:18:29.809008627 +0000 UTC m=+887.704230973" observedRunningTime="2025-12-05 19:18:34.309802365 +0000 UTC m=+892.205024671" watchObservedRunningTime="2025-12-05 19:18:34.316570258 +0000 UTC m=+892.211792594" Dec 05 19:18:37 crc kubenswrapper[4828]: I1205 19:18:37.177257 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:37 crc kubenswrapper[4828]: I1205 19:18:37.231936 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:42 crc kubenswrapper[4828]: I1205 19:18:42.180738 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-p2wtq" Dec 05 19:18:42 crc kubenswrapper[4828]: I1205 19:18:42.802959 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-w8vp2" Dec 05 19:18:43 crc kubenswrapper[4828]: I1205 19:18:43.792053 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gqq4l" Dec 05 19:18:46 crc kubenswrapper[4828]: I1205 19:18:46.802674 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6d4mx"] Dec 05 19:18:46 crc kubenswrapper[4828]: I1205 19:18:46.804239 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6d4mx" Dec 05 19:18:46 crc kubenswrapper[4828]: I1205 19:18:46.807730 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 05 19:18:46 crc kubenswrapper[4828]: I1205 19:18:46.810100 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 05 19:18:46 crc kubenswrapper[4828]: I1205 19:18:46.823579 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-t4ktb" Dec 05 19:18:46 crc kubenswrapper[4828]: I1205 19:18:46.836963 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6d4mx"] Dec 05 19:18:46 crc kubenswrapper[4828]: I1205 19:18:46.944735 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjxvk\" (UniqueName: \"kubernetes.io/projected/47e2919a-e707-4ee2-be39-0783b4e6362f-kube-api-access-pjxvk\") pod \"openstack-operator-index-6d4mx\" (UID: \"47e2919a-e707-4ee2-be39-0783b4e6362f\") " pod="openstack-operators/openstack-operator-index-6d4mx" Dec 05 19:18:47 crc kubenswrapper[4828]: I1205 19:18:47.046715 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjxvk\" (UniqueName: \"kubernetes.io/projected/47e2919a-e707-4ee2-be39-0783b4e6362f-kube-api-access-pjxvk\") pod \"openstack-operator-index-6d4mx\" (UID: \"47e2919a-e707-4ee2-be39-0783b4e6362f\") " pod="openstack-operators/openstack-operator-index-6d4mx" Dec 05 19:18:47 crc kubenswrapper[4828]: I1205 19:18:47.065700 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjxvk\" (UniqueName: \"kubernetes.io/projected/47e2919a-e707-4ee2-be39-0783b4e6362f-kube-api-access-pjxvk\") pod \"openstack-operator-index-6d4mx\" (UID: \"47e2919a-e707-4ee2-be39-0783b4e6362f\") " pod="openstack-operators/openstack-operator-index-6d4mx" Dec 05 19:18:47 crc kubenswrapper[4828]: I1205 19:18:47.143005 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6d4mx" Dec 05 19:18:47 crc kubenswrapper[4828]: I1205 19:18:47.675912 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6d4mx"] Dec 05 19:18:48 crc kubenswrapper[4828]: I1205 19:18:48.404742 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6d4mx" event={"ID":"47e2919a-e707-4ee2-be39-0783b4e6362f","Type":"ContainerStarted","Data":"6d66118ea34ddff1d30738ab1cf5d198c456b897b2c97b8965e51fb4113b8c9a"} Dec 05 19:18:50 crc kubenswrapper[4828]: I1205 19:18:50.157435 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6d4mx"] Dec 05 19:18:50 crc kubenswrapper[4828]: I1205 19:18:50.422556 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6d4mx" event={"ID":"47e2919a-e707-4ee2-be39-0783b4e6362f","Type":"ContainerStarted","Data":"b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392"} Dec 05 19:18:50 crc kubenswrapper[4828]: I1205 19:18:50.443874 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6d4mx" podStartSLOduration=2.300487635 podStartE2EDuration="4.443809194s" podCreationTimestamp="2025-12-05 19:18:46 +0000 UTC" firstStartedPulling="2025-12-05 19:18:47.67928838 +0000 UTC m=+905.574510686" lastFinishedPulling="2025-12-05 19:18:49.822609929 +0000 UTC m=+907.717832245" observedRunningTime="2025-12-05 19:18:50.441401469 +0000 UTC m=+908.336623815" watchObservedRunningTime="2025-12-05 19:18:50.443809194 +0000 UTC m=+908.339031510" Dec 05 19:18:50 crc kubenswrapper[4828]: I1205 19:18:50.767913 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mn9b4"] Dec 05 19:18:50 crc kubenswrapper[4828]: I1205 19:18:50.769092 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mn9b4" Dec 05 19:18:50 crc kubenswrapper[4828]: I1205 19:18:50.786962 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mn9b4"] Dec 05 19:18:50 crc kubenswrapper[4828]: I1205 19:18:50.901608 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc7q6\" (UniqueName: \"kubernetes.io/projected/850c5dc4-1658-4c59-96eb-999fb7392164-kube-api-access-lc7q6\") pod \"openstack-operator-index-mn9b4\" (UID: \"850c5dc4-1658-4c59-96eb-999fb7392164\") " pod="openstack-operators/openstack-operator-index-mn9b4" Dec 05 19:18:51 crc kubenswrapper[4828]: I1205 19:18:51.002746 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc7q6\" (UniqueName: \"kubernetes.io/projected/850c5dc4-1658-4c59-96eb-999fb7392164-kube-api-access-lc7q6\") pod \"openstack-operator-index-mn9b4\" (UID: \"850c5dc4-1658-4c59-96eb-999fb7392164\") " pod="openstack-operators/openstack-operator-index-mn9b4" Dec 05 19:18:51 crc kubenswrapper[4828]: I1205 19:18:51.027740 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc7q6\" (UniqueName: \"kubernetes.io/projected/850c5dc4-1658-4c59-96eb-999fb7392164-kube-api-access-lc7q6\") pod \"openstack-operator-index-mn9b4\" (UID: \"850c5dc4-1658-4c59-96eb-999fb7392164\") " pod="openstack-operators/openstack-operator-index-mn9b4" Dec 05 19:18:51 crc kubenswrapper[4828]: I1205 19:18:51.105770 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mn9b4" Dec 05 19:18:51 crc kubenswrapper[4828]: I1205 19:18:51.428267 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-6d4mx" podUID="47e2919a-e707-4ee2-be39-0783b4e6362f" containerName="registry-server" containerID="cri-o://b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392" gracePeriod=2 Dec 05 19:18:51 crc kubenswrapper[4828]: I1205 19:18:51.624155 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mn9b4"] Dec 05 19:18:51 crc kubenswrapper[4828]: W1205 19:18:51.653603 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod850c5dc4_1658_4c59_96eb_999fb7392164.slice/crio-d797cfbb889263af48893e0db871584532c19706c0252dcd32da8324ce38d70d WatchSource:0}: Error finding container d797cfbb889263af48893e0db871584532c19706c0252dcd32da8324ce38d70d: Status 404 returned error can't find the container with id d797cfbb889263af48893e0db871584532c19706c0252dcd32da8324ce38d70d Dec 05 19:18:51 crc kubenswrapper[4828]: I1205 19:18:51.867524 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6d4mx" Dec 05 19:18:51 crc kubenswrapper[4828]: I1205 19:18:51.918845 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjxvk\" (UniqueName: \"kubernetes.io/projected/47e2919a-e707-4ee2-be39-0783b4e6362f-kube-api-access-pjxvk\") pod \"47e2919a-e707-4ee2-be39-0783b4e6362f\" (UID: \"47e2919a-e707-4ee2-be39-0783b4e6362f\") " Dec 05 19:18:51 crc kubenswrapper[4828]: I1205 19:18:51.927161 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47e2919a-e707-4ee2-be39-0783b4e6362f-kube-api-access-pjxvk" (OuterVolumeSpecName: "kube-api-access-pjxvk") pod "47e2919a-e707-4ee2-be39-0783b4e6362f" (UID: "47e2919a-e707-4ee2-be39-0783b4e6362f"). InnerVolumeSpecName "kube-api-access-pjxvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.020148 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjxvk\" (UniqueName: \"kubernetes.io/projected/47e2919a-e707-4ee2-be39-0783b4e6362f-kube-api-access-pjxvk\") on node \"crc\" DevicePath \"\"" Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.440065 4828 generic.go:334] "Generic (PLEG): container finished" podID="47e2919a-e707-4ee2-be39-0783b4e6362f" containerID="b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392" exitCode=0 Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.440185 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6d4mx" event={"ID":"47e2919a-e707-4ee2-be39-0783b4e6362f","Type":"ContainerDied","Data":"b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392"} Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.440518 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6d4mx" event={"ID":"47e2919a-e707-4ee2-be39-0783b4e6362f","Type":"ContainerDied","Data":"6d66118ea34ddff1d30738ab1cf5d198c456b897b2c97b8965e51fb4113b8c9a"} Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.440208 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6d4mx" Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.440573 4828 scope.go:117] "RemoveContainer" containerID="b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392" Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.443471 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mn9b4" event={"ID":"850c5dc4-1658-4c59-96eb-999fb7392164","Type":"ContainerStarted","Data":"8439ff488e621c4a40129f4e6e7043de1443266192edc592b8be55d4588b24f8"} Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.443511 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mn9b4" event={"ID":"850c5dc4-1658-4c59-96eb-999fb7392164","Type":"ContainerStarted","Data":"d797cfbb889263af48893e0db871584532c19706c0252dcd32da8324ce38d70d"} Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.482200 4828 scope.go:117] "RemoveContainer" containerID="b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392" Dec 05 19:18:52 crc kubenswrapper[4828]: E1205 19:18:52.482960 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392\": container with ID starting with b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392 not found: ID does not exist" containerID="b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392" Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.483063 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392"} err="failed to get container status \"b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392\": rpc error: code = NotFound desc = could not find container \"b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392\": container with ID starting with b0f7abe343e9a89c295e6c585657a8514c8765f769bfc9a5edae3018ce338392 not found: ID does not exist" Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.492095 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mn9b4" podStartSLOduration=2.413567517 podStartE2EDuration="2.492073413s" podCreationTimestamp="2025-12-05 19:18:50 +0000 UTC" firstStartedPulling="2025-12-05 19:18:51.662469013 +0000 UTC m=+909.557691359" lastFinishedPulling="2025-12-05 19:18:51.740974919 +0000 UTC m=+909.636197255" observedRunningTime="2025-12-05 19:18:52.480257995 +0000 UTC m=+910.375480341" watchObservedRunningTime="2025-12-05 19:18:52.492073413 +0000 UTC m=+910.387295749" Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.515740 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6d4mx"] Dec 05 19:18:52 crc kubenswrapper[4828]: I1205 19:18:52.523100 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-6d4mx"] Dec 05 19:18:54 crc kubenswrapper[4828]: I1205 19:18:54.453980 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47e2919a-e707-4ee2-be39-0783b4e6362f" path="/var/lib/kubelet/pods/47e2919a-e707-4ee2-be39-0783b4e6362f/volumes" Dec 05 19:19:01 crc kubenswrapper[4828]: I1205 19:19:01.106381 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-index-mn9b4" Dec 05 19:19:01 crc kubenswrapper[4828]: I1205 19:19:01.107093 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-mn9b4" Dec 05 19:19:01 crc kubenswrapper[4828]: I1205 19:19:01.137778 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-mn9b4" Dec 05 19:19:01 crc kubenswrapper[4828]: I1205 19:19:01.557641 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-mn9b4" Dec 05 19:19:05 crc kubenswrapper[4828]: I1205 19:19:05.260855 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:19:05 crc kubenswrapper[4828]: I1205 19:19:05.260940 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:19:07 crc kubenswrapper[4828]: I1205 19:19:07.757391 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"] Dec 05 19:19:07 crc kubenswrapper[4828]: E1205 19:19:07.757661 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e2919a-e707-4ee2-be39-0783b4e6362f" containerName="registry-server" Dec 05 19:19:07 crc kubenswrapper[4828]: I1205 19:19:07.757677 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e2919a-e707-4ee2-be39-0783b4e6362f" containerName="registry-server" Dec 05 19:19:07 crc kubenswrapper[4828]: I1205 19:19:07.757851 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e2919a-e707-4ee2-be39-0783b4e6362f" containerName="registry-server" Dec 05 19:19:07 crc kubenswrapper[4828]: I1205 19:19:07.758741 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:07 crc kubenswrapper[4828]: I1205 19:19:07.761254 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-pzpq4"
Dec 05 19:19:07 crc kubenswrapper[4828]: I1205 19:19:07.769304 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"]
Dec 05 19:19:07 crc kubenswrapper[4828]: I1205 19:19:07.947146 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1363220d-423b-45a5-a067-559b8a36f610-util\") pod \"5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97\" (UID: \"1363220d-423b-45a5-a067-559b8a36f610\") " pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:07 crc kubenswrapper[4828]: I1205 19:19:07.947487 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g5rc\" (UniqueName: \"kubernetes.io/projected/1363220d-423b-45a5-a067-559b8a36f610-kube-api-access-4g5rc\") pod \"5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97\" (UID: \"1363220d-423b-45a5-a067-559b8a36f610\") " pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:07 crc kubenswrapper[4828]: I1205 19:19:07.947603 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1363220d-423b-45a5-a067-559b8a36f610-bundle\") pod \"5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97\" (UID: \"1363220d-423b-45a5-a067-559b8a36f610\") " pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:08 crc kubenswrapper[4828]: I1205 19:19:08.048777 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1363220d-423b-45a5-a067-559b8a36f610-util\") pod \"5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97\" (UID: \"1363220d-423b-45a5-a067-559b8a36f610\") " pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:08 crc kubenswrapper[4828]: I1205 19:19:08.048854 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g5rc\" (UniqueName: \"kubernetes.io/projected/1363220d-423b-45a5-a067-559b8a36f610-kube-api-access-4g5rc\") pod \"5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97\" (UID: \"1363220d-423b-45a5-a067-559b8a36f610\") " pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:08 crc kubenswrapper[4828]: I1205 19:19:08.048906 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1363220d-423b-45a5-a067-559b8a36f610-bundle\") pod \"5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97\" (UID: \"1363220d-423b-45a5-a067-559b8a36f610\") " pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:08 crc kubenswrapper[4828]: I1205 19:19:08.049348 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1363220d-423b-45a5-a067-559b8a36f610-util\") pod \"5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97\" (UID: \"1363220d-423b-45a5-a067-559b8a36f610\") " pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:08 crc kubenswrapper[4828]: I1205 19:19:08.049388 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1363220d-423b-45a5-a067-559b8a36f610-bundle\") pod \"5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97\" (UID: \"1363220d-423b-45a5-a067-559b8a36f610\") " pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:08 crc kubenswrapper[4828]: I1205 19:19:08.068547 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g5rc\" (UniqueName: \"kubernetes.io/projected/1363220d-423b-45a5-a067-559b8a36f610-kube-api-access-4g5rc\") pod \"5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97\" (UID: \"1363220d-423b-45a5-a067-559b8a36f610\") " pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:08 crc kubenswrapper[4828]: I1205 19:19:08.137074 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:08 crc kubenswrapper[4828]: I1205 19:19:08.566166 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"]
Dec 05 19:19:09 crc kubenswrapper[4828]: I1205 19:19:09.584285 4828 generic.go:334] "Generic (PLEG): container finished" podID="1363220d-423b-45a5-a067-559b8a36f610" containerID="112d9689877bf5a4ad0fd3b91515da7876f236ec7f283c1085021b784cc47a52" exitCode=0
Dec 05 19:19:09 crc kubenswrapper[4828]: I1205 19:19:09.584513 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97" event={"ID":"1363220d-423b-45a5-a067-559b8a36f610","Type":"ContainerDied","Data":"112d9689877bf5a4ad0fd3b91515da7876f236ec7f283c1085021b784cc47a52"}
Dec 05 19:19:09 crc kubenswrapper[4828]: I1205 19:19:09.584729 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97" event={"ID":"1363220d-423b-45a5-a067-559b8a36f610","Type":"ContainerStarted","Data":"4d1c0c22afb79d9c35cb476754abaa861b5730a165360b9820ff252c64af608b"}
Dec 05 19:19:10 crc kubenswrapper[4828]: I1205 19:19:10.596962 4828 generic.go:334] "Generic (PLEG): container finished" podID="1363220d-423b-45a5-a067-559b8a36f610" containerID="35d18f06e19fdd409df3ed68751196a7dd8d65b53ed3e4b373239dd2bdbe4d4d" exitCode=0
Dec 05 19:19:10 crc kubenswrapper[4828]: I1205 19:19:10.597330 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97" event={"ID":"1363220d-423b-45a5-a067-559b8a36f610","Type":"ContainerDied","Data":"35d18f06e19fdd409df3ed68751196a7dd8d65b53ed3e4b373239dd2bdbe4d4d"}
Dec 05 19:19:11 crc kubenswrapper[4828]: I1205 19:19:11.607750 4828 generic.go:334] "Generic (PLEG): container finished" podID="1363220d-423b-45a5-a067-559b8a36f610" containerID="c644dbd57a8bb6bd342771bba663fa8c288fd30607ab06003bc73fd8abe02642" exitCode=0
Dec 05 19:19:11 crc kubenswrapper[4828]: I1205 19:19:11.607882 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97" event={"ID":"1363220d-423b-45a5-a067-559b8a36f610","Type":"ContainerDied","Data":"c644dbd57a8bb6bd342771bba663fa8c288fd30607ab06003bc73fd8abe02642"}
Dec 05 19:19:12 crc kubenswrapper[4828]: I1205 19:19:12.898536 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:12 crc kubenswrapper[4828]: I1205 19:19:12.916243 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1363220d-423b-45a5-a067-559b8a36f610-bundle\") pod \"1363220d-423b-45a5-a067-559b8a36f610\" (UID: \"1363220d-423b-45a5-a067-559b8a36f610\") "
Dec 05 19:19:12 crc kubenswrapper[4828]: I1205 19:19:12.916300 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g5rc\" (UniqueName: \"kubernetes.io/projected/1363220d-423b-45a5-a067-559b8a36f610-kube-api-access-4g5rc\") pod \"1363220d-423b-45a5-a067-559b8a36f610\" (UID: \"1363220d-423b-45a5-a067-559b8a36f610\") "
Dec 05 19:19:12 crc kubenswrapper[4828]: I1205 19:19:12.916330 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1363220d-423b-45a5-a067-559b8a36f610-util\") pod \"1363220d-423b-45a5-a067-559b8a36f610\" (UID: \"1363220d-423b-45a5-a067-559b8a36f610\") "
Dec 05 19:19:12 crc kubenswrapper[4828]: I1205 19:19:12.916916 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1363220d-423b-45a5-a067-559b8a36f610-bundle" (OuterVolumeSpecName: "bundle") pod "1363220d-423b-45a5-a067-559b8a36f610" (UID: "1363220d-423b-45a5-a067-559b8a36f610"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:19:12 crc kubenswrapper[4828]: I1205 19:19:12.920291 4828 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1363220d-423b-45a5-a067-559b8a36f610-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 19:19:12 crc kubenswrapper[4828]: I1205 19:19:12.926179 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1363220d-423b-45a5-a067-559b8a36f610-kube-api-access-4g5rc" (OuterVolumeSpecName: "kube-api-access-4g5rc") pod "1363220d-423b-45a5-a067-559b8a36f610" (UID: "1363220d-423b-45a5-a067-559b8a36f610"). InnerVolumeSpecName "kube-api-access-4g5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:19:12 crc kubenswrapper[4828]: I1205 19:19:12.929659 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1363220d-423b-45a5-a067-559b8a36f610-util" (OuterVolumeSpecName: "util") pod "1363220d-423b-45a5-a067-559b8a36f610" (UID: "1363220d-423b-45a5-a067-559b8a36f610"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:19:13 crc kubenswrapper[4828]: I1205 19:19:13.021498 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g5rc\" (UniqueName: \"kubernetes.io/projected/1363220d-423b-45a5-a067-559b8a36f610-kube-api-access-4g5rc\") on node \"crc\" DevicePath \"\""
Dec 05 19:19:13 crc kubenswrapper[4828]: I1205 19:19:13.021550 4828 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1363220d-423b-45a5-a067-559b8a36f610-util\") on node \"crc\" DevicePath \"\""
Dec 05 19:19:13 crc kubenswrapper[4828]: I1205 19:19:13.625409 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97" event={"ID":"1363220d-423b-45a5-a067-559b8a36f610","Type":"ContainerDied","Data":"4d1c0c22afb79d9c35cb476754abaa861b5730a165360b9820ff252c64af608b"}
Dec 05 19:19:13 crc kubenswrapper[4828]: I1205 19:19:13.625712 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d1c0c22afb79d9c35cb476754abaa861b5730a165360b9820ff252c64af608b"
Dec 05 19:19:13 crc kubenswrapper[4828]: I1205 19:19:13.625522 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97"
Dec 05 19:19:19 crc kubenswrapper[4828]: I1205 19:19:19.929983 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5"]
Dec 05 19:19:19 crc kubenswrapper[4828]: E1205 19:19:19.930784 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1363220d-423b-45a5-a067-559b8a36f610" containerName="util"
Dec 05 19:19:19 crc kubenswrapper[4828]: I1205 19:19:19.930801 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1363220d-423b-45a5-a067-559b8a36f610" containerName="util"
Dec 05 19:19:19 crc kubenswrapper[4828]: E1205 19:19:19.930813 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1363220d-423b-45a5-a067-559b8a36f610" containerName="pull"
Dec 05 19:19:19 crc kubenswrapper[4828]: I1205 19:19:19.930838 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1363220d-423b-45a5-a067-559b8a36f610" containerName="pull"
Dec 05 19:19:19 crc kubenswrapper[4828]: E1205 19:19:19.930860 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1363220d-423b-45a5-a067-559b8a36f610" containerName="extract"
Dec 05 19:19:19 crc kubenswrapper[4828]: I1205 19:19:19.930867 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1363220d-423b-45a5-a067-559b8a36f610" containerName="extract"
Dec 05 19:19:19 crc kubenswrapper[4828]: I1205 19:19:19.930998 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1363220d-423b-45a5-a067-559b8a36f610" containerName="extract"
Dec 05 19:19:19 crc kubenswrapper[4828]: I1205 19:19:19.931400 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5"
Dec 05 19:19:19 crc kubenswrapper[4828]: I1205 19:19:19.933941 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-kcpkl"
Dec 05 19:19:20 crc kubenswrapper[4828]: I1205 19:19:20.014211 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8jl8\" (UniqueName: \"kubernetes.io/projected/4ceee1c7-178c-4496-9cdd-c302d5180aca-kube-api-access-w8jl8\") pod \"openstack-operator-controller-operator-56d574f77c-99sf5\" (UID: \"4ceee1c7-178c-4496-9cdd-c302d5180aca\") " pod="openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5"
Dec 05 19:19:20 crc kubenswrapper[4828]: I1205 19:19:20.015306 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5"]
Dec 05 19:19:20 crc kubenswrapper[4828]: I1205 19:19:20.116332 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8jl8\" (UniqueName: \"kubernetes.io/projected/4ceee1c7-178c-4496-9cdd-c302d5180aca-kube-api-access-w8jl8\") pod \"openstack-operator-controller-operator-56d574f77c-99sf5\" (UID: \"4ceee1c7-178c-4496-9cdd-c302d5180aca\") " pod="openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5"
Dec 05 19:19:20 crc kubenswrapper[4828]: I1205 19:19:20.144520 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8jl8\" (UniqueName: \"kubernetes.io/projected/4ceee1c7-178c-4496-9cdd-c302d5180aca-kube-api-access-w8jl8\") pod \"openstack-operator-controller-operator-56d574f77c-99sf5\" (UID: \"4ceee1c7-178c-4496-9cdd-c302d5180aca\") " pod="openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5"
Dec 05 19:19:20 crc kubenswrapper[4828]: I1205 19:19:20.248021 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5"
Dec 05 19:19:20 crc kubenswrapper[4828]: I1205 19:19:20.519900 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5"]
Dec 05 19:19:20 crc kubenswrapper[4828]: W1205 19:19:20.525305 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ceee1c7_178c_4496_9cdd_c302d5180aca.slice/crio-4dbdbb15a75cf5238b9a7f3fb178480677411bbaa600c3950d1a269d5078bcfa WatchSource:0}: Error finding container 4dbdbb15a75cf5238b9a7f3fb178480677411bbaa600c3950d1a269d5078bcfa: Status 404 returned error can't find the container with id 4dbdbb15a75cf5238b9a7f3fb178480677411bbaa600c3950d1a269d5078bcfa
Dec 05 19:19:20 crc kubenswrapper[4828]: I1205 19:19:20.669696 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5" event={"ID":"4ceee1c7-178c-4496-9cdd-c302d5180aca","Type":"ContainerStarted","Data":"4dbdbb15a75cf5238b9a7f3fb178480677411bbaa600c3950d1a269d5078bcfa"}
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.222722 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r8p52"]
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.224850 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.228289 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r8p52"]
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.333404 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4d80f7-3490-4123-b8a6-b6ff2480c593-catalog-content\") pod \"certified-operators-r8p52\" (UID: \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\") " pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.333512 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6rjr\" (UniqueName: \"kubernetes.io/projected/2b4d80f7-3490-4123-b8a6-b6ff2480c593-kube-api-access-s6rjr\") pod \"certified-operators-r8p52\" (UID: \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\") " pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.333539 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4d80f7-3490-4123-b8a6-b6ff2480c593-utilities\") pod \"certified-operators-r8p52\" (UID: \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\") " pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.434969 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6rjr\" (UniqueName: \"kubernetes.io/projected/2b4d80f7-3490-4123-b8a6-b6ff2480c593-kube-api-access-s6rjr\") pod \"certified-operators-r8p52\" (UID: \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\") " pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.435023 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4d80f7-3490-4123-b8a6-b6ff2480c593-utilities\") pod \"certified-operators-r8p52\" (UID: \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\") " pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.435072 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4d80f7-3490-4123-b8a6-b6ff2480c593-catalog-content\") pod \"certified-operators-r8p52\" (UID: \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\") " pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.435587 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4d80f7-3490-4123-b8a6-b6ff2480c593-catalog-content\") pod \"certified-operators-r8p52\" (UID: \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\") " pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.435743 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4d80f7-3490-4123-b8a6-b6ff2480c593-utilities\") pod \"certified-operators-r8p52\" (UID: \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\") " pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.453651 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6rjr\" (UniqueName: \"kubernetes.io/projected/2b4d80f7-3490-4123-b8a6-b6ff2480c593-kube-api-access-s6rjr\") pod \"certified-operators-r8p52\" (UID: \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\") " pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.550774 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:21 crc kubenswrapper[4828]: I1205 19:19:21.840526 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r8p52"]
Dec 05 19:19:22 crc kubenswrapper[4828]: I1205 19:19:22.684429 4828 generic.go:334] "Generic (PLEG): container finished" podID="2b4d80f7-3490-4123-b8a6-b6ff2480c593" containerID="affc33b45655b9a9df6a23900211631b889f6d91bbfa29fafc6bed4f0ecef4b7" exitCode=0
Dec 05 19:19:22 crc kubenswrapper[4828]: I1205 19:19:22.684583 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8p52" event={"ID":"2b4d80f7-3490-4123-b8a6-b6ff2480c593","Type":"ContainerDied","Data":"affc33b45655b9a9df6a23900211631b889f6d91bbfa29fafc6bed4f0ecef4b7"}
Dec 05 19:19:22 crc kubenswrapper[4828]: I1205 19:19:22.684694 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8p52" event={"ID":"2b4d80f7-3490-4123-b8a6-b6ff2480c593","Type":"ContainerStarted","Data":"141ef05449539cf620cd1b9befb64a7e7cf02117bd745068e59b893d85865938"}
Dec 05 19:19:25 crc kubenswrapper[4828]: I1205 19:19:25.709424 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5" event={"ID":"4ceee1c7-178c-4496-9cdd-c302d5180aca","Type":"ContainerStarted","Data":"0c09b16a5d091450261d87f21d4159ab5342b2fe9d4941c1aca60ffa371c5d28"}
Dec 05 19:19:25 crc kubenswrapper[4828]: I1205 19:19:25.709834 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5"
Dec 05 19:19:25 crc kubenswrapper[4828]: I1205 19:19:25.736787 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5" podStartSLOduration=1.870314723 podStartE2EDuration="6.736763991s" podCreationTimestamp="2025-12-05 19:19:19 +0000 UTC" firstStartedPulling="2025-12-05 19:19:20.526916651 +0000 UTC m=+938.422138957" lastFinishedPulling="2025-12-05 19:19:25.393365919 +0000 UTC m=+943.288588225" observedRunningTime="2025-12-05 19:19:25.732389453 +0000 UTC m=+943.627611769" watchObservedRunningTime="2025-12-05 19:19:25.736763991 +0000 UTC m=+943.631986297"
Dec 05 19:19:26 crc kubenswrapper[4828]: I1205 19:19:26.722077 4828 generic.go:334] "Generic (PLEG): container finished" podID="2b4d80f7-3490-4123-b8a6-b6ff2480c593" containerID="c0b5ea158902caad0da32024f2728debcac590a3f0d2f78b70df7ae7f510a73e" exitCode=0
Dec 05 19:19:26 crc kubenswrapper[4828]: I1205 19:19:26.722159 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8p52" event={"ID":"2b4d80f7-3490-4123-b8a6-b6ff2480c593","Type":"ContainerDied","Data":"c0b5ea158902caad0da32024f2728debcac590a3f0d2f78b70df7ae7f510a73e"}
Dec 05 19:19:27 crc kubenswrapper[4828]: I1205 19:19:27.748404 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8p52" event={"ID":"2b4d80f7-3490-4123-b8a6-b6ff2480c593","Type":"ContainerStarted","Data":"c52f7ef13ca2f8b2e5218235989f5fa542e16b5670be4309f75fb0f74ac0a7d4"}
Dec 05 19:19:27 crc kubenswrapper[4828]: I1205 19:19:27.778788 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r8p52" podStartSLOduration=3.367352451 podStartE2EDuration="6.778767801s" podCreationTimestamp="2025-12-05 19:19:21 +0000 UTC" firstStartedPulling="2025-12-05 19:19:23.707966837 +0000 UTC m=+941.603189143" lastFinishedPulling="2025-12-05 19:19:27.119382147 +0000 UTC m=+945.014604493" observedRunningTime="2025-12-05 19:19:27.775342789 +0000 UTC m=+945.670565125" watchObservedRunningTime="2025-12-05 19:19:27.778767801 +0000 UTC m=+945.673990117"
Dec 05 19:19:30 crc kubenswrapper[4828]: I1205 19:19:30.251974 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-56d574f77c-99sf5"
Dec 05 19:19:31 crc kubenswrapper[4828]: I1205 19:19:31.551960 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:31 crc kubenswrapper[4828]: I1205 19:19:31.553008 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:31 crc kubenswrapper[4828]: I1205 19:19:31.613945 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:35 crc kubenswrapper[4828]: I1205 19:19:35.259470 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 19:19:35 crc kubenswrapper[4828]: I1205 19:19:35.260398 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 19:19:41 crc kubenswrapper[4828]: I1205 19:19:41.623714 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.213351 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hkbw9"]
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.216190 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.275694 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hkbw9"]
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.317371 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8801cbdf-fab9-4032-a3cb-7bce541b8498-utilities\") pod \"community-operators-hkbw9\" (UID: \"8801cbdf-fab9-4032-a3cb-7bce541b8498\") " pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.317570 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqzmd\" (UniqueName: \"kubernetes.io/projected/8801cbdf-fab9-4032-a3cb-7bce541b8498-kube-api-access-sqzmd\") pod \"community-operators-hkbw9\" (UID: \"8801cbdf-fab9-4032-a3cb-7bce541b8498\") " pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.317638 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8801cbdf-fab9-4032-a3cb-7bce541b8498-catalog-content\") pod \"community-operators-hkbw9\" (UID: \"8801cbdf-fab9-4032-a3cb-7bce541b8498\") " pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.419355 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8801cbdf-fab9-4032-a3cb-7bce541b8498-utilities\") pod \"community-operators-hkbw9\" (UID: \"8801cbdf-fab9-4032-a3cb-7bce541b8498\") " pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.419734 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqzmd\" (UniqueName: \"kubernetes.io/projected/8801cbdf-fab9-4032-a3cb-7bce541b8498-kube-api-access-sqzmd\") pod \"community-operators-hkbw9\" (UID: \"8801cbdf-fab9-4032-a3cb-7bce541b8498\") " pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.419851 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8801cbdf-fab9-4032-a3cb-7bce541b8498-catalog-content\") pod \"community-operators-hkbw9\" (UID: \"8801cbdf-fab9-4032-a3cb-7bce541b8498\") " pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.419948 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8801cbdf-fab9-4032-a3cb-7bce541b8498-utilities\") pod \"community-operators-hkbw9\" (UID: \"8801cbdf-fab9-4032-a3cb-7bce541b8498\") " pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.420333 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8801cbdf-fab9-4032-a3cb-7bce541b8498-catalog-content\") pod \"community-operators-hkbw9\" (UID: \"8801cbdf-fab9-4032-a3cb-7bce541b8498\") " pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.439846 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqzmd\" (UniqueName: \"kubernetes.io/projected/8801cbdf-fab9-4032-a3cb-7bce541b8498-kube-api-access-sqzmd\") pod \"community-operators-hkbw9\" (UID: \"8801cbdf-fab9-4032-a3cb-7bce541b8498\") " pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:44 crc kubenswrapper[4828]: I1205 19:19:44.537191 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:45 crc kubenswrapper[4828]: I1205 19:19:45.205095 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r8p52"]
Dec 05 19:19:45 crc kubenswrapper[4828]: I1205 19:19:45.205562 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r8p52" podUID="2b4d80f7-3490-4123-b8a6-b6ff2480c593" containerName="registry-server" containerID="cri-o://c52f7ef13ca2f8b2e5218235989f5fa542e16b5670be4309f75fb0f74ac0a7d4" gracePeriod=2
Dec 05 19:19:45 crc kubenswrapper[4828]: I1205 19:19:45.274756 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hkbw9"]
Dec 05 19:19:45 crc kubenswrapper[4828]: W1205 19:19:45.283101 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8801cbdf_fab9_4032_a3cb_7bce541b8498.slice/crio-7e6cc443c837f55f0b1059722a96cc36806cbdb9106f8220dd579fd4af4f0fe9 WatchSource:0}: Error finding container 7e6cc443c837f55f0b1059722a96cc36806cbdb9106f8220dd579fd4af4f0fe9: Status 404 returned error can't find the container with id 7e6cc443c837f55f0b1059722a96cc36806cbdb9106f8220dd579fd4af4f0fe9
Dec 05 19:19:45 crc kubenswrapper[4828]: I1205 19:19:45.957388 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkbw9" event={"ID":"8801cbdf-fab9-4032-a3cb-7bce541b8498","Type":"ContainerStarted","Data":"7e6cc443c837f55f0b1059722a96cc36806cbdb9106f8220dd579fd4af4f0fe9"}
Dec 05 19:19:46 crc kubenswrapper[4828]: I1205 19:19:46.967650 4828 generic.go:334] "Generic (PLEG): container finished" podID="8801cbdf-fab9-4032-a3cb-7bce541b8498" containerID="43ce77b11f931c3ec9610c49e0b44e96b5b3830c3d5920763a7fc688fe539beb" exitCode=0
Dec 05 19:19:46 crc kubenswrapper[4828]: I1205 19:19:46.967731 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkbw9" event={"ID":"8801cbdf-fab9-4032-a3cb-7bce541b8498","Type":"ContainerDied","Data":"43ce77b11f931c3ec9610c49e0b44e96b5b3830c3d5920763a7fc688fe539beb"}
Dec 05 19:19:46 crc kubenswrapper[4828]: I1205 19:19:46.970387 4828 generic.go:334] "Generic (PLEG): container finished" podID="2b4d80f7-3490-4123-b8a6-b6ff2480c593" containerID="c52f7ef13ca2f8b2e5218235989f5fa542e16b5670be4309f75fb0f74ac0a7d4" exitCode=0
Dec 05 19:19:46 crc kubenswrapper[4828]: I1205 19:19:46.970422 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8p52" event={"ID":"2b4d80f7-3490-4123-b8a6-b6ff2480c593","Type":"ContainerDied","Data":"c52f7ef13ca2f8b2e5218235989f5fa542e16b5670be4309f75fb0f74ac0a7d4"}
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.464180 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.567974 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4d80f7-3490-4123-b8a6-b6ff2480c593-utilities\") pod \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\" (UID: \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\") "
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.568034 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4d80f7-3490-4123-b8a6-b6ff2480c593-catalog-content\") pod \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\" (UID: \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\") "
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.568124 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6rjr\" (UniqueName: \"kubernetes.io/projected/2b4d80f7-3490-4123-b8a6-b6ff2480c593-kube-api-access-s6rjr\") pod \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\" (UID: \"2b4d80f7-3490-4123-b8a6-b6ff2480c593\") "
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.570659 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b4d80f7-3490-4123-b8a6-b6ff2480c593-utilities" (OuterVolumeSpecName: "utilities") pod "2b4d80f7-3490-4123-b8a6-b6ff2480c593" (UID: "2b4d80f7-3490-4123-b8a6-b6ff2480c593"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.576002 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b4d80f7-3490-4123-b8a6-b6ff2480c593-kube-api-access-s6rjr" (OuterVolumeSpecName: "kube-api-access-s6rjr") pod "2b4d80f7-3490-4123-b8a6-b6ff2480c593" (UID: "2b4d80f7-3490-4123-b8a6-b6ff2480c593"). InnerVolumeSpecName "kube-api-access-s6rjr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.622077 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b4d80f7-3490-4123-b8a6-b6ff2480c593-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b4d80f7-3490-4123-b8a6-b6ff2480c593" (UID: "2b4d80f7-3490-4123-b8a6-b6ff2480c593"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.669646 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6rjr\" (UniqueName: \"kubernetes.io/projected/2b4d80f7-3490-4123-b8a6-b6ff2480c593-kube-api-access-s6rjr\") on node \"crc\" DevicePath \"\""
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.669701 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4d80f7-3490-4123-b8a6-b6ff2480c593-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.669718 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4d80f7-3490-4123-b8a6-b6ff2480c593-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.977601 4828 generic.go:334] "Generic (PLEG): container finished" podID="8801cbdf-fab9-4032-a3cb-7bce541b8498" containerID="6fe89f56a6c728f44d9e6b3358cca83fa6439b9e181db525050a696cf113055a" exitCode=0
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.977679 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkbw9" event={"ID":"8801cbdf-fab9-4032-a3cb-7bce541b8498","Type":"ContainerDied","Data":"6fe89f56a6c728f44d9e6b3358cca83fa6439b9e181db525050a696cf113055a"}
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.980124 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8p52" event={"ID":"2b4d80f7-3490-4123-b8a6-b6ff2480c593","Type":"ContainerDied","Data":"141ef05449539cf620cd1b9befb64a7e7cf02117bd745068e59b893d85865938"}
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.980171 4828 scope.go:117] "RemoveContainer" containerID="c52f7ef13ca2f8b2e5218235989f5fa542e16b5670be4309f75fb0f74ac0a7d4"
Dec 05 19:19:47 crc kubenswrapper[4828]: I1205 19:19:47.980194 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8p52"
Dec 05 19:19:48 crc kubenswrapper[4828]: I1205 19:19:48.002429 4828 scope.go:117] "RemoveContainer" containerID="c0b5ea158902caad0da32024f2728debcac590a3f0d2f78b70df7ae7f510a73e"
Dec 05 19:19:48 crc kubenswrapper[4828]: I1205 19:19:48.012405 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r8p52"]
Dec 05 19:19:48 crc kubenswrapper[4828]: I1205 19:19:48.016774 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r8p52"]
Dec 05 19:19:48 crc kubenswrapper[4828]: I1205 19:19:48.040859 4828 scope.go:117] "RemoveContainer" containerID="affc33b45655b9a9df6a23900211631b889f6d91bbfa29fafc6bed4f0ecef4b7"
Dec 05 19:19:48 crc kubenswrapper[4828]: I1205 19:19:48.462364 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b4d80f7-3490-4123-b8a6-b6ff2480c593" path="/var/lib/kubelet/pods/2b4d80f7-3490-4123-b8a6-b6ff2480c593/volumes"
Dec 05 19:19:48 crc kubenswrapper[4828]: I1205 19:19:48.988561 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkbw9" event={"ID":"8801cbdf-fab9-4032-a3cb-7bce541b8498","Type":"ContainerStarted","Data":"5b4b06c1f76d4ce5d85286335f51cd66e9e070e0286c29eebd59f02697aeb70b"}
Dec 05 19:19:49 crc kubenswrapper[4828]: I1205 19:19:49.008147 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hkbw9" podStartSLOduration=3.593392663 podStartE2EDuration="5.008131494s" podCreationTimestamp="2025-12-05 19:19:44 +0000 UTC" firstStartedPulling="2025-12-05 19:19:46.969765043 +0000 UTC m=+964.864987349" lastFinishedPulling="2025-12-05 19:19:48.384503864 +0000 UTC m=+966.279726180" observedRunningTime="2025-12-05 19:19:49.005262967 +0000 UTC m=+966.900485283" watchObservedRunningTime="2025-12-05 19:19:49.008131494 +0000 UTC m=+966.903353800"
Dec 05 19:19:54 crc kubenswrapper[4828]: I1205 19:19:54.538172 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:54 crc kubenswrapper[4828]: I1205 19:19:54.538573 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:54 crc kubenswrapper[4828]: I1205 19:19:54.586263 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:55 crc kubenswrapper[4828]: I1205 19:19:55.112158 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:57 crc kubenswrapper[4828]: I1205 19:19:57.014172 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hkbw9"]
Dec 05 19:19:57 crc kubenswrapper[4828]: I1205 19:19:57.055315 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hkbw9" podUID="8801cbdf-fab9-4032-a3cb-7bce541b8498" containerName="registry-server" containerID="cri-o://5b4b06c1f76d4ce5d85286335f51cd66e9e070e0286c29eebd59f02697aeb70b" gracePeriod=2
Dec 05 19:19:58 crc kubenswrapper[4828]: I1205 19:19:58.073644 4828 generic.go:334] "Generic (PLEG): container finished" podID="8801cbdf-fab9-4032-a3cb-7bce541b8498" containerID="5b4b06c1f76d4ce5d85286335f51cd66e9e070e0286c29eebd59f02697aeb70b" exitCode=0
Dec 05 19:19:58 crc kubenswrapper[4828]: I1205 19:19:58.074723 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkbw9" event={"ID":"8801cbdf-fab9-4032-a3cb-7bce541b8498","Type":"ContainerDied","Data":"5b4b06c1f76d4ce5d85286335f51cd66e9e070e0286c29eebd59f02697aeb70b"}
Dec 05 19:19:58 crc kubenswrapper[4828]: I1205 19:19:58.197656 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:58 crc kubenswrapper[4828]: I1205 19:19:58.353418 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8801cbdf-fab9-4032-a3cb-7bce541b8498-utilities\") pod \"8801cbdf-fab9-4032-a3cb-7bce541b8498\" (UID: \"8801cbdf-fab9-4032-a3cb-7bce541b8498\") "
Dec 05 19:19:58 crc kubenswrapper[4828]: I1205 19:19:58.353790 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8801cbdf-fab9-4032-a3cb-7bce541b8498-catalog-content\") pod \"8801cbdf-fab9-4032-a3cb-7bce541b8498\" (UID: \"8801cbdf-fab9-4032-a3cb-7bce541b8498\") "
Dec 05 19:19:58 crc kubenswrapper[4828]: I1205 19:19:58.353840 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqzmd\" (UniqueName: \"kubernetes.io/projected/8801cbdf-fab9-4032-a3cb-7bce541b8498-kube-api-access-sqzmd\") pod \"8801cbdf-fab9-4032-a3cb-7bce541b8498\" (UID: \"8801cbdf-fab9-4032-a3cb-7bce541b8498\") "
Dec 05 19:19:58 crc kubenswrapper[4828]: I1205 19:19:58.354487 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8801cbdf-fab9-4032-a3cb-7bce541b8498-utilities" (OuterVolumeSpecName: "utilities") pod "8801cbdf-fab9-4032-a3cb-7bce541b8498" (UID: "8801cbdf-fab9-4032-a3cb-7bce541b8498"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:19:58 crc kubenswrapper[4828]: I1205 19:19:58.359029 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8801cbdf-fab9-4032-a3cb-7bce541b8498-kube-api-access-sqzmd" (OuterVolumeSpecName: "kube-api-access-sqzmd") pod "8801cbdf-fab9-4032-a3cb-7bce541b8498" (UID: "8801cbdf-fab9-4032-a3cb-7bce541b8498"). InnerVolumeSpecName "kube-api-access-sqzmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:19:58 crc kubenswrapper[4828]: I1205 19:19:58.456067 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8801cbdf-fab9-4032-a3cb-7bce541b8498-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 19:19:58 crc kubenswrapper[4828]: I1205 19:19:58.456112 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqzmd\" (UniqueName: \"kubernetes.io/projected/8801cbdf-fab9-4032-a3cb-7bce541b8498-kube-api-access-sqzmd\") on node \"crc\" DevicePath \"\""
Dec 05 19:19:58 crc kubenswrapper[4828]: I1205 19:19:58.527666 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8801cbdf-fab9-4032-a3cb-7bce541b8498-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8801cbdf-fab9-4032-a3cb-7bce541b8498" (UID: "8801cbdf-fab9-4032-a3cb-7bce541b8498"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:19:58 crc kubenswrapper[4828]: I1205 19:19:58.557241 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8801cbdf-fab9-4032-a3cb-7bce541b8498-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 19:19:59 crc kubenswrapper[4828]: I1205 19:19:59.081339 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkbw9" event={"ID":"8801cbdf-fab9-4032-a3cb-7bce541b8498","Type":"ContainerDied","Data":"7e6cc443c837f55f0b1059722a96cc36806cbdb9106f8220dd579fd4af4f0fe9"}
Dec 05 19:19:59 crc kubenswrapper[4828]: I1205 19:19:59.081395 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hkbw9"
Dec 05 19:19:59 crc kubenswrapper[4828]: I1205 19:19:59.081402 4828 scope.go:117] "RemoveContainer" containerID="5b4b06c1f76d4ce5d85286335f51cd66e9e070e0286c29eebd59f02697aeb70b"
Dec 05 19:19:59 crc kubenswrapper[4828]: I1205 19:19:59.099943 4828 scope.go:117] "RemoveContainer" containerID="6fe89f56a6c728f44d9e6b3358cca83fa6439b9e181db525050a696cf113055a"
Dec 05 19:19:59 crc kubenswrapper[4828]: I1205 19:19:59.108639 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hkbw9"]
Dec 05 19:19:59 crc kubenswrapper[4828]: I1205 19:19:59.113713 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hkbw9"]
Dec 05 19:19:59 crc kubenswrapper[4828]: I1205 19:19:59.145272 4828 scope.go:117] "RemoveContainer" containerID="43ce77b11f931c3ec9610c49e0b44e96b5b3830c3d5920763a7fc688fe539beb"
Dec 05 19:20:00 crc kubenswrapper[4828]: I1205 19:20:00.453461 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8801cbdf-fab9-4032-a3cb-7bce541b8498" path="/var/lib/kubelet/pods/8801cbdf-fab9-4032-a3cb-7bce541b8498/volumes"
Dec 05 19:20:05 crc kubenswrapper[4828]: I1205 19:20:05.260071 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 19:20:05 crc kubenswrapper[4828]: I1205 19:20:05.260356 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 19:20:05 crc kubenswrapper[4828]: I1205 19:20:05.260397 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv"
Dec 05 19:20:05 crc kubenswrapper[4828]: I1205 19:20:05.260866 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba6c96d79cafa37f2c2c4a1d891acafd85624229c151c0bd90de50b84f8cad3b"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 19:20:05 crc kubenswrapper[4828]: I1205 19:20:05.260916 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://ba6c96d79cafa37f2c2c4a1d891acafd85624229c151c0bd90de50b84f8cad3b" gracePeriod=600
Dec 05 19:20:06 crc kubenswrapper[4828]: I1205 19:20:06.125349 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="ba6c96d79cafa37f2c2c4a1d891acafd85624229c151c0bd90de50b84f8cad3b" exitCode=0
Dec 05 19:20:06 crc kubenswrapper[4828]: I1205 19:20:06.125439 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"ba6c96d79cafa37f2c2c4a1d891acafd85624229c151c0bd90de50b84f8cad3b"}
Dec 05 19:20:06 crc kubenswrapper[4828]: I1205 19:20:06.125903 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"0cc286b8dceed84d395e55058b3c3160e80eae904633740211fa06dda4862d4f"}
Dec 05 19:20:06 crc kubenswrapper[4828]: I1205 19:20:06.125926 4828 scope.go:117] "RemoveContainer" containerID="6e314ace055f344d073229b86b5faa8f9693ed01502a72c37b8b7db2eef860a3"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.687775 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n"]
Dec 05 19:20:08 crc kubenswrapper[4828]: E1205 19:20:08.690767 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4d80f7-3490-4123-b8a6-b6ff2480c593" containerName="registry-server"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.690817 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4d80f7-3490-4123-b8a6-b6ff2480c593" containerName="registry-server"
Dec 05 19:20:08 crc kubenswrapper[4828]: E1205 19:20:08.690875 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4d80f7-3490-4123-b8a6-b6ff2480c593" containerName="extract-content"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.690889 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4d80f7-3490-4123-b8a6-b6ff2480c593" containerName="extract-content"
Dec 05 19:20:08 crc kubenswrapper[4828]: E1205 19:20:08.690916 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8801cbdf-fab9-4032-a3cb-7bce541b8498" containerName="extract-content"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.690930 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8801cbdf-fab9-4032-a3cb-7bce541b8498" containerName="extract-content"
Dec 05 19:20:08 crc kubenswrapper[4828]: E1205 19:20:08.690955 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8801cbdf-fab9-4032-a3cb-7bce541b8498" containerName="registry-server"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.690966 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8801cbdf-fab9-4032-a3cb-7bce541b8498" containerName="registry-server"
Dec 05 19:20:08 crc kubenswrapper[4828]: E1205 19:20:08.690993 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8801cbdf-fab9-4032-a3cb-7bce541b8498" containerName="extract-utilities"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.691007 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8801cbdf-fab9-4032-a3cb-7bce541b8498" containerName="extract-utilities"
Dec 05 19:20:08 crc kubenswrapper[4828]: E1205 19:20:08.691094 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4d80f7-3490-4123-b8a6-b6ff2480c593" containerName="extract-utilities"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.691106 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4d80f7-3490-4123-b8a6-b6ff2480c593" containerName="extract-utilities"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.692268 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4d80f7-3490-4123-b8a6-b6ff2480c593" containerName="registry-server"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.692299 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8801cbdf-fab9-4032-a3cb-7bce541b8498" containerName="registry-server"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.694445 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.698175 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.699812 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.703204 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zrwsh"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.709204 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-m5t9z"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.710259 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.724643 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.764879 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.765814 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.772174 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.773673 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tsnjf"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.786899 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.787945 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.791417 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-n5fn9"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.803116 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.804623 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.804694 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mclzg\" (UniqueName: \"kubernetes.io/projected/4276bd34-acab-4936-a044-7d00e33e806f-kube-api-access-mclzg\") pod \"cinder-operator-controller-manager-859b6ccc6-qftqg\" (UID: \"4276bd34-acab-4936-a044-7d00e33e806f\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.804764 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27bgq\" (UniqueName: \"kubernetes.io/projected/1f1ef15a-9832-4ee5-8077-066329f6180a-kube-api-access-27bgq\") pod \"barbican-operator-controller-manager-7d9dfd778-jbg6n\" (UID: \"1f1ef15a-9832-4ee5-8077-066329f6180a\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.809342 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-bg24z"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.811877 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.829075 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.830147 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.833730 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-lbcfj"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.841016 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.866418 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.892946 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.894039 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.900151 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.900278 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-t55z7"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.905578 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27bgq\" (UniqueName: \"kubernetes.io/projected/1f1ef15a-9832-4ee5-8077-066329f6180a-kube-api-access-27bgq\") pod \"barbican-operator-controller-manager-7d9dfd778-jbg6n\" (UID: \"1f1ef15a-9832-4ee5-8077-066329f6180a\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.905657 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck5ss\" (UniqueName: \"kubernetes.io/projected/f5bca056-89ff-4e36-82b7-ad44d9dc00d6-kube-api-access-ck5ss\") pod \"heat-operator-controller-manager-5f64f6f8bb-k7qf5\" (UID: \"f5bca056-89ff-4e36-82b7-ad44d9dc00d6\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.905712 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2vwv\" (UniqueName: \"kubernetes.io/projected/7dbe4cda-8493-4e63-9544-7dfff2495c65-kube-api-access-b2vwv\") pod \"horizon-operator-controller-manager-68c6d99b8f-g2wd4\" (UID: \"7dbe4cda-8493-4e63-9544-7dfff2495c65\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.905746 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mclzg\" (UniqueName: \"kubernetes.io/projected/4276bd34-acab-4936-a044-7d00e33e806f-kube-api-access-mclzg\") pod \"cinder-operator-controller-manager-859b6ccc6-qftqg\" (UID: \"4276bd34-acab-4936-a044-7d00e33e806f\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.905801 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh7nl\" (UniqueName: \"kubernetes.io/projected/16bfe264-a5d1-433e-93ee-c6821e882c4c-kube-api-access-mh7nl\") pod \"designate-operator-controller-manager-78b4bc895b-cr94b\" (UID: \"16bfe264-a5d1-433e-93ee-c6821e882c4c\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.905909 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7sxn\" (UniqueName: \"kubernetes.io/projected/a27719f3-1ce1-4a2b-876f-f280966f8e8c-kube-api-access-z7sxn\") pod \"glance-operator-controller-manager-77987cd8cd-v92pz\" (UID: \"a27719f3-1ce1-4a2b-876f-f280966f8e8c\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.917118 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.918074 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.931387 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8tw24"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.934426 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.943852 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27bgq\" (UniqueName: \"kubernetes.io/projected/1f1ef15a-9832-4ee5-8077-066329f6180a-kube-api-access-27bgq\") pod \"barbican-operator-controller-manager-7d9dfd778-jbg6n\" (UID: \"1f1ef15a-9832-4ee5-8077-066329f6180a\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.947348 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.952990 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mclzg\" (UniqueName: \"kubernetes.io/projected/4276bd34-acab-4936-a044-7d00e33e806f-kube-api-access-mclzg\") pod \"cinder-operator-controller-manager-859b6ccc6-qftqg\" (UID: \"4276bd34-acab-4936-a044-7d00e33e806f\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.966525 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.967695 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.976965 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.978043 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.979383 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8s2j7"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.982405 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-cz69b"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.990115 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz"]
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.990989 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz"
Dec 05 19:20:08 crc kubenswrapper[4828]: I1205 19:20:08.996388 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6"]
Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.007687 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5qnd\" (UniqueName: \"kubernetes.io/projected/04671bff-8616-471f-bd46-21e6b17227eb-kube-api-access-j5qnd\") pod \"ironic-operator-controller-manager-6c548fd776-9jbwm\" (UID: \"04671bff-8616-471f-bd46-21e6b17227eb\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm"
Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.007762 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2vwv\" (UniqueName: \"kubernetes.io/projected/7dbe4cda-8493-4e63-9544-7dfff2495c65-kube-api-access-b2vwv\") pod \"horizon-operator-controller-manager-68c6d99b8f-g2wd4\" (UID: \"7dbe4cda-8493-4e63-9544-7dfff2495c65\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4"
Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.007798 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert\") pod \"infra-operator-controller-manager-575477cdfc-lrhm5\" (UID: \"03c4fc5d-6be1-47b4-9c39-7bb86046dafd\") " pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.007832 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh7nl\" (UniqueName: \"kubernetes.io/projected/16bfe264-a5d1-433e-93ee-c6821e882c4c-kube-api-access-mh7nl\") pod \"designate-operator-controller-manager-78b4bc895b-cr94b\" (UID: \"16bfe264-a5d1-433e-93ee-c6821e882c4c\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b"
Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.007858 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7sxn\" (UniqueName: \"kubernetes.io/projected/a27719f3-1ce1-4a2b-876f-f280966f8e8c-kube-api-access-z7sxn\") pod \"glance-operator-controller-manager-77987cd8cd-v92pz\" (UID: \"a27719f3-1ce1-4a2b-876f-f280966f8e8c\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz"
Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.007891 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fftjp\" (UniqueName: \"kubernetes.io/projected/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-kube-api-access-fftjp\") pod \"infra-operator-controller-manager-575477cdfc-lrhm5\" (UID: \"03c4fc5d-6be1-47b4-9c39-7bb86046dafd\") " pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.008021 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck5ss\" (UniqueName: \"kubernetes.io/projected/f5bca056-89ff-4e36-82b7-ad44d9dc00d6-kube-api-access-ck5ss\") pod \"heat-operator-controller-manager-5f64f6f8bb-k7qf5\" (UID: \"f5bca056-89ff-4e36-82b7-ad44d9dc00d6\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5"
Dec 05 19:20:09 crc
kubenswrapper[4828]: I1205 19:20:09.009635 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.017204 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-r468r" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.022178 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.026386 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.027313 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.029781 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2vwv\" (UniqueName: \"kubernetes.io/projected/7dbe4cda-8493-4e63-9544-7dfff2495c65-kube-api-access-b2vwv\") pod \"horizon-operator-controller-manager-68c6d99b8f-g2wd4\" (UID: \"7dbe4cda-8493-4e63-9544-7dfff2495c65\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.041446 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.043636 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-shzw8" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.044394 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh7nl\" (UniqueName: \"kubernetes.io/projected/16bfe264-a5d1-433e-93ee-c6821e882c4c-kube-api-access-mh7nl\") pod \"designate-operator-controller-manager-78b4bc895b-cr94b\" (UID: \"16bfe264-a5d1-433e-93ee-c6821e882c4c\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.053194 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7sxn\" (UniqueName: \"kubernetes.io/projected/a27719f3-1ce1-4a2b-876f-f280966f8e8c-kube-api-access-z7sxn\") pod \"glance-operator-controller-manager-77987cd8cd-v92pz\" (UID: \"a27719f3-1ce1-4a2b-876f-f280966f8e8c\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.053456 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck5ss\" (UniqueName: \"kubernetes.io/projected/f5bca056-89ff-4e36-82b7-ad44d9dc00d6-kube-api-access-ck5ss\") pod \"heat-operator-controller-manager-5f64f6f8bb-k7qf5\" (UID: \"f5bca056-89ff-4e36-82b7-ad44d9dc00d6\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.084174 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.085384 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.088833 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-xx99s" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.090883 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.095253 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.100228 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.100348 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.115476 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.116375 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl65t\" (UniqueName: \"kubernetes.io/projected/a03904e7-57be-4491-b11d-c8e698b718e6-kube-api-access-nl65t\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-gsczh\" (UID: \"a03904e7-57be-4491-b11d-c8e698b718e6\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.116596 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj424\" (UniqueName: \"kubernetes.io/projected/c6d11d68-9609-432a-a855-4789df83739d-kube-api-access-lj424\") pod \"mariadb-operator-controller-manager-56bbcc9d85-77twz\" (UID: \"c6d11d68-9609-432a-a855-4789df83739d\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.116639 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qnd\" (UniqueName: \"kubernetes.io/projected/04671bff-8616-471f-bd46-21e6b17227eb-kube-api-access-j5qnd\") pod \"ironic-operator-controller-manager-6c548fd776-9jbwm\" (UID: \"04671bff-8616-471f-bd46-21e6b17227eb\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.116686 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert\") pod \"infra-operator-controller-manager-575477cdfc-lrhm5\" (UID: \"03c4fc5d-6be1-47b4-9c39-7bb86046dafd\") " pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.116716 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkmk4\" (UniqueName: \"kubernetes.io/projected/1e335b54-f84b-4d91-a58e-0348728d171e-kube-api-access-vkmk4\") pod \"keystone-operator-controller-manager-7765d96ddf-pbmt2\" (UID: 
\"1e335b54-f84b-4d91-a58e-0348728d171e\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.116754 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fftjp\" (UniqueName: \"kubernetes.io/projected/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-kube-api-access-fftjp\") pod \"infra-operator-controller-manager-575477cdfc-lrhm5\" (UID: \"03c4fc5d-6be1-47b4-9c39-7bb86046dafd\") " pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.116787 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lt9x\" (UniqueName: \"kubernetes.io/projected/3b18c18d-624d-4d50-95ba-a4f755f74936-kube-api-access-5lt9x\") pod \"manila-operator-controller-manager-7c79b5df47-cgkv6\" (UID: \"3b18c18d-624d-4d50-95ba-a4f755f74936\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.116401 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-lnwqm" Dec 05 19:20:09 crc kubenswrapper[4828]: E1205 19:20:09.118395 4828 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 19:20:09 crc kubenswrapper[4828]: E1205 19:20:09.118462 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert podName:03c4fc5d-6be1-47b4-9c39-7bb86046dafd nodeName:}" failed. No retries permitted until 2025-12-05 19:20:09.618444057 +0000 UTC m=+987.513666363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert") pod "infra-operator-controller-manager-575477cdfc-lrhm5" (UID: "03c4fc5d-6be1-47b4-9c39-7bb86046dafd") : secret "infra-operator-webhook-server-cert" not found Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.123065 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.129393 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.132963 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.158199 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.169348 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5qnd\" (UniqueName: \"kubernetes.io/projected/04671bff-8616-471f-bd46-21e6b17227eb-kube-api-access-j5qnd\") pod \"ironic-operator-controller-manager-6c548fd776-9jbwm\" (UID: \"04671bff-8616-471f-bd46-21e6b17227eb\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.171475 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fftjp\" (UniqueName: \"kubernetes.io/projected/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-kube-api-access-fftjp\") pod \"infra-operator-controller-manager-575477cdfc-lrhm5\" (UID: \"03c4fc5d-6be1-47b4-9c39-7bb86046dafd\") " pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.196988 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.219219 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lt9x\" (UniqueName: \"kubernetes.io/projected/3b18c18d-624d-4d50-95ba-a4f755f74936-kube-api-access-5lt9x\") pod \"manila-operator-controller-manager-7c79b5df47-cgkv6\" (UID: \"3b18c18d-624d-4d50-95ba-a4f755f74936\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.219265 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8fd5\" (UniqueName: \"kubernetes.io/projected/a5d6b211-6f88-45fe-8e38-608271465dfe-kube-api-access-p8fd5\") pod \"octavia-operator-controller-manager-998648c74-cfnbh\" (UID: \"a5d6b211-6f88-45fe-8e38-608271465dfe\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.219290 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl65t\" (UniqueName: \"kubernetes.io/projected/a03904e7-57be-4491-b11d-c8e698b718e6-kube-api-access-nl65t\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-gsczh\" (UID: \"a03904e7-57be-4491-b11d-c8e698b718e6\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.219315 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj424\" (UniqueName: \"kubernetes.io/projected/c6d11d68-9609-432a-a855-4789df83739d-kube-api-access-lj424\") pod \"mariadb-operator-controller-manager-56bbcc9d85-77twz\" (UID: \"c6d11d68-9609-432a-a855-4789df83739d\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.219366 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gprx5\" (UniqueName: \"kubernetes.io/projected/757d5884-94d5-45f1-ae2c-49fd93ce512c-kube-api-access-gprx5\") pod \"nova-operator-controller-manager-697bc559fc-l6gtp\" (UID: \"757d5884-94d5-45f1-ae2c-49fd93ce512c\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp" Dec 05 19:20:09 crc 
kubenswrapper[4828]: I1205 19:20:09.219461 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkmk4\" (UniqueName: \"kubernetes.io/projected/1e335b54-f84b-4d91-a58e-0348728d171e-kube-api-access-vkmk4\") pod \"keystone-operator-controller-manager-7765d96ddf-pbmt2\" (UID: \"1e335b54-f84b-4d91-a58e-0348728d171e\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.223911 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.225634 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.231435 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.232741 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.233391 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-m9dql" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.233417 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.237301 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vph44" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.237593 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.240740 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl65t\" (UniqueName: \"kubernetes.io/projected/a03904e7-57be-4491-b11d-c8e698b718e6-kube-api-access-nl65t\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-gsczh\" (UID: \"a03904e7-57be-4491-b11d-c8e698b718e6\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.246546 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj424\" (UniqueName: \"kubernetes.io/projected/c6d11d68-9609-432a-a855-4789df83739d-kube-api-access-lj424\") pod \"mariadb-operator-controller-manager-56bbcc9d85-77twz\" (UID: \"c6d11d68-9609-432a-a855-4789df83739d\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.246959 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkmk4\" (UniqueName: \"kubernetes.io/projected/1e335b54-f84b-4d91-a58e-0348728d171e-kube-api-access-vkmk4\") pod \"keystone-operator-controller-manager-7765d96ddf-pbmt2\" (UID: \"1e335b54-f84b-4d91-a58e-0348728d171e\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.247472 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lt9x\" (UniqueName: \"kubernetes.io/projected/3b18c18d-624d-4d50-95ba-a4f755f74936-kube-api-access-5lt9x\") pod \"manila-operator-controller-manager-7c79b5df47-cgkv6\" (UID: \"3b18c18d-624d-4d50-95ba-a4f755f74936\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.252870 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.257942 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.259420 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.261345 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-c7tvm" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.263363 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.284056 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.286014 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.289009 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-r54bg" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.292558 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.299595 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.300084 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.323759 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvb9k\" (UniqueName: \"kubernetes.io/projected/1ce74c6c-ee96-4712-983f-4090e176f31e-kube-api-access-dvb9k\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j\" (UID: \"1ce74c6c-ee96-4712-983f-4090e176f31e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.323913 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8fd5\" (UniqueName: \"kubernetes.io/projected/a5d6b211-6f88-45fe-8e38-608271465dfe-kube-api-access-p8fd5\") pod \"octavia-operator-controller-manager-998648c74-cfnbh\" (UID: \"a5d6b211-6f88-45fe-8e38-608271465dfe\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.323959 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwgx9\" (UniqueName: \"kubernetes.io/projected/ba58375c-b3fa-4eb8-8813-c55f003674ca-kube-api-access-pwgx9\") pod \"ovn-operator-controller-manager-b6456fdb6-4gr5g\" (UID: \"ba58375c-b3fa-4eb8-8813-c55f003674ca\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.324019 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkpf9\" (UniqueName: \"kubernetes.io/projected/cd2986fb-f299-446c-85b7-28427df0ca51-kube-api-access-mkpf9\") pod \"placement-operator-controller-manager-78f8948974-cf5gg\" (UID: \"cd2986fb-f299-446c-85b7-28427df0ca51\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.324049 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gprx5\" (UniqueName: \"kubernetes.io/projected/757d5884-94d5-45f1-ae2c-49fd93ce512c-kube-api-access-gprx5\") pod \"nova-operator-controller-manager-697bc559fc-l6gtp\" (UID: \"757d5884-94d5-45f1-ae2c-49fd93ce512c\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.324120 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j\" (UID: \"1ce74c6c-ee96-4712-983f-4090e176f31e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.326723 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.327033 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.328126 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.336069 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ksgj2" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.349959 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.352236 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8fd5\" (UniqueName: \"kubernetes.io/projected/a5d6b211-6f88-45fe-8e38-608271465dfe-kube-api-access-p8fd5\") pod \"octavia-operator-controller-manager-998648c74-cfnbh\" (UID: \"a5d6b211-6f88-45fe-8e38-608271465dfe\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.369336 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gprx5\" (UniqueName: \"kubernetes.io/projected/757d5884-94d5-45f1-ae2c-49fd93ce512c-kube-api-access-gprx5\") pod \"nova-operator-controller-manager-697bc559fc-l6gtp\" (UID: \"757d5884-94d5-45f1-ae2c-49fd93ce512c\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.389946 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-h2d97"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.391031 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.395907 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-vqw4d" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.400971 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.413185 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-h2d97"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.426995 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkpf9\" (UniqueName: \"kubernetes.io/projected/cd2986fb-f299-446c-85b7-28427df0ca51-kube-api-access-mkpf9\") pod \"placement-operator-controller-manager-78f8948974-cf5gg\" (UID: \"cd2986fb-f299-446c-85b7-28427df0ca51\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.427065 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j\" (UID: \"1ce74c6c-ee96-4712-983f-4090e176f31e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.427140 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvb9k\" (UniqueName: \"kubernetes.io/projected/1ce74c6c-ee96-4712-983f-4090e176f31e-kube-api-access-dvb9k\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j\" (UID: \"1ce74c6c-ee96-4712-983f-4090e176f31e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.427179 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8f2h\" (UniqueName: \"kubernetes.io/projected/8c1110f4-40af-416e-9624-22a901897000-kube-api-access-h8f2h\") pod \"telemetry-operator-controller-manager-76cc84c6bb-hdxm9\" (UID: \"8c1110f4-40af-416e-9624-22a901897000\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.427214 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccpjp\" (UniqueName: \"kubernetes.io/projected/48135908-b8f6-47ab-aeb7-3f74bb3e2cde-kube-api-access-ccpjp\") pod \"swift-operator-controller-manager-5f8c65bbfc-6xg2c\" (UID: \"48135908-b8f6-47ab-aeb7-3f74bb3e2cde\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.427242 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwgx9\" (UniqueName: \"kubernetes.io/projected/ba58375c-b3fa-4eb8-8813-c55f003674ca-kube-api-access-pwgx9\") pod \"ovn-operator-controller-manager-b6456fdb6-4gr5g\" (UID: \"ba58375c-b3fa-4eb8-8813-c55f003674ca\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g" Dec 05 19:20:09 crc kubenswrapper[4828]: E1205 19:20:09.427443 4828 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 19:20:09 crc kubenswrapper[4828]: E1205 19:20:09.427516 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert 
podName:1ce74c6c-ee96-4712-983f-4090e176f31e nodeName:}" failed. No retries permitted until 2025-12-05 19:20:09.927499996 +0000 UTC m=+987.822722302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" (UID: "1ce74c6c-ee96-4712-983f-4090e176f31e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.450597 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.458779 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkpf9\" (UniqueName: \"kubernetes.io/projected/cd2986fb-f299-446c-85b7-28427df0ca51-kube-api-access-mkpf9\") pod \"placement-operator-controller-manager-78f8948974-cf5gg\" (UID: \"cd2986fb-f299-446c-85b7-28427df0ca51\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.464053 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.465060 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.471347 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-m8d4w" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.489185 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.519658 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.520087 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.525881 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.526797 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.529763 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2m4fx" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.529948 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.530012 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.528395 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdlqj\" (UniqueName: \"kubernetes.io/projected/13474ecf-c76e-400f-bc72-70c11ab8356b-kube-api-access-sdlqj\") pod \"test-operator-controller-manager-5854674fcc-h2d97\" (UID: \"13474ecf-c76e-400f-bc72-70c11ab8356b\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.530284 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwgx9\" (UniqueName: \"kubernetes.io/projected/ba58375c-b3fa-4eb8-8813-c55f003674ca-kube-api-access-pwgx9\") pod \"ovn-operator-controller-manager-b6456fdb6-4gr5g\" (UID: \"ba58375c-b3fa-4eb8-8813-c55f003674ca\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.530312 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8f2h\" (UniqueName: \"kubernetes.io/projected/8c1110f4-40af-416e-9624-22a901897000-kube-api-access-h8f2h\") pod \"telemetry-operator-controller-manager-76cc84c6bb-hdxm9\" (UID: \"8c1110f4-40af-416e-9624-22a901897000\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.530346 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp9tw\" (UniqueName: \"kubernetes.io/projected/bf305ed3-e27f-42bc-9fb7-bec903ca820f-kube-api-access-fp9tw\") pod \"watcher-operator-controller-manager-769dc69bc-qdslg\" (UID: \"bf305ed3-e27f-42bc-9fb7-bec903ca820f\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.530398 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccpjp\" (UniqueName: \"kubernetes.io/projected/48135908-b8f6-47ab-aeb7-3f74bb3e2cde-kube-api-access-ccpjp\") pod \"swift-operator-controller-manager-5f8c65bbfc-6xg2c\" (UID: \"48135908-b8f6-47ab-aeb7-3f74bb3e2cde\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.549287 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvb9k\" (UniqueName: \"kubernetes.io/projected/1ce74c6c-ee96-4712-983f-4090e176f31e-kube-api-access-dvb9k\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j\" (UID: \"1ce74c6c-ee96-4712-983f-4090e176f31e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.562292 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h8f2h\" (UniqueName: \"kubernetes.io/projected/8c1110f4-40af-416e-9624-22a901897000-kube-api-access-h8f2h\") pod \"telemetry-operator-controller-manager-76cc84c6bb-hdxm9\" (UID: \"8c1110f4-40af-416e-9624-22a901897000\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.564841 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccpjp\" (UniqueName: \"kubernetes.io/projected/48135908-b8f6-47ab-aeb7-3f74bb3e2cde-kube-api-access-ccpjp\") pod \"swift-operator-controller-manager-5f8c65bbfc-6xg2c\" (UID: \"48135908-b8f6-47ab-aeb7-3f74bb3e2cde\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.567319 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.574438 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.605502 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.681846 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.687140 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.687236 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp9tw\" (UniqueName: \"kubernetes.io/projected/bf305ed3-e27f-42bc-9fb7-bec903ca820f-kube-api-access-fp9tw\") pod \"watcher-operator-controller-manager-769dc69bc-qdslg\" (UID: \"bf305ed3-e27f-42bc-9fb7-bec903ca820f\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.687307 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdlqj\" (UniqueName: \"kubernetes.io/projected/13474ecf-c76e-400f-bc72-70c11ab8356b-kube-api-access-sdlqj\") pod \"test-operator-controller-manager-5854674fcc-h2d97\" (UID: \"13474ecf-c76e-400f-bc72-70c11ab8356b\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.687410 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n44hs\" (UniqueName: \"kubernetes.io/projected/408ecf49-524f-4743-9cef-5c65877dd176-kube-api-access-n44hs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:09 crc 
kubenswrapper[4828]: I1205 19:20:09.695531 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.695675 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert\") pod \"infra-operator-controller-manager-575477cdfc-lrhm5\" (UID: \"03c4fc5d-6be1-47b4-9c39-7bb86046dafd\") " pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:20:09 crc kubenswrapper[4828]: E1205 19:20:09.696098 4828 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 19:20:09 crc kubenswrapper[4828]: E1205 19:20:09.696277 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert podName:03c4fc5d-6be1-47b4-9c39-7bb86046dafd nodeName:}" failed. No retries permitted until 2025-12-05 19:20:10.696242461 +0000 UTC m=+988.591464767 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert") pod "infra-operator-controller-manager-575477cdfc-lrhm5" (UID: "03c4fc5d-6be1-47b4-9c39-7bb86046dafd") : secret "infra-operator-webhook-server-cert" not found Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.705487 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.729056 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp9tw\" (UniqueName: \"kubernetes.io/projected/bf305ed3-e27f-42bc-9fb7-bec903ca820f-kube-api-access-fp9tw\") pod \"watcher-operator-controller-manager-769dc69bc-qdslg\" (UID: \"bf305ed3-e27f-42bc-9fb7-bec903ca820f\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.742601 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdlqj\" (UniqueName: \"kubernetes.io/projected/13474ecf-c76e-400f-bc72-70c11ab8356b-kube-api-access-sdlqj\") pod \"test-operator-controller-manager-5854674fcc-h2d97\" (UID: \"13474ecf-c76e-400f-bc72-70c11ab8356b\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.748974 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.755096 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.761269 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-r9gf2" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.763422 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq"] Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.797471 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.797553 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.797630 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n44hs\" (UniqueName: \"kubernetes.io/projected/408ecf49-524f-4743-9cef-5c65877dd176-kube-api-access-n44hs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:09 crc kubenswrapper[4828]: E1205 19:20:09.797995 4828 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 19:20:09 crc kubenswrapper[4828]: E1205 19:20:09.798113 4828 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 19:20:09 crc kubenswrapper[4828]: E1205 19:20:09.798357 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs podName:408ecf49-524f-4743-9cef-5c65877dd176 nodeName:}" failed. No retries permitted until 2025-12-05 19:20:10.298102162 +0000 UTC m=+988.193324468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs") pod "openstack-operator-controller-manager-d5958f94b-76zjx" (UID: "408ecf49-524f-4743-9cef-5c65877dd176") : secret "metrics-server-cert" not found Dec 05 19:20:09 crc kubenswrapper[4828]: E1205 19:20:09.798399 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs podName:408ecf49-524f-4743-9cef-5c65877dd176 nodeName:}" failed. No retries permitted until 2025-12-05 19:20:10.29838808 +0000 UTC m=+988.193610476 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs") pod "openstack-operator-controller-manager-d5958f94b-76zjx" (UID: "408ecf49-524f-4743-9cef-5c65877dd176") : secret "webhook-server-cert" not found Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.818640 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n44hs\" (UniqueName: \"kubernetes.io/projected/408ecf49-524f-4743-9cef-5c65877dd176-kube-api-access-n44hs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.850275 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.899249 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j5hm\" (UniqueName: \"kubernetes.io/projected/dabc71c3-947a-4d4c-90bd-b5bb473ce013-kube-api-access-4j5hm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w4rgq\" (UID: \"dabc71c3-947a-4d4c-90bd-b5bb473ce013\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.910755 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" Dec 05 19:20:09 crc kubenswrapper[4828]: I1205 19:20:09.958027 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.000271 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j5hm\" (UniqueName: \"kubernetes.io/projected/dabc71c3-947a-4d4c-90bd-b5bb473ce013-kube-api-access-4j5hm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w4rgq\" (UID: \"dabc71c3-947a-4d4c-90bd-b5bb473ce013\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq" Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.000328 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j\" (UID: \"1ce74c6c-ee96-4712-983f-4090e176f31e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.000466 4828 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.000512 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert podName:1ce74c6c-ee96-4712-983f-4090e176f31e nodeName:}" failed. No retries permitted until 2025-12-05 19:20:11.000497098 +0000 UTC m=+988.895719414 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" (UID: "1ce74c6c-ee96-4712-983f-4090e176f31e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.022616 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j5hm\" (UniqueName: \"kubernetes.io/projected/dabc71c3-947a-4d4c-90bd-b5bb473ce013-kube-api-access-4j5hm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w4rgq\" (UID: \"dabc71c3-947a-4d4c-90bd-b5bb473ce013\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq" Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.079248 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.085798 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq" Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.085934 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.126682 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.139606 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.176244 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg" event={"ID":"4276bd34-acab-4936-a044-7d00e33e806f","Type":"ContainerStarted","Data":"2115b8f1ad8aa18fa83b1529a2c87c92c6a4b7c7ea371d794b5ee82ea660c683"} Dec 05 19:20:10 crc kubenswrapper[4828]: W1205 19:20:10.187095 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda27719f3_1ce1_4a2b_876f_f280966f8e8c.slice/crio-7bd37c140236f6c1c35cba7768d51f7336d68593ba5bc568315c32b5e4c3ed68 WatchSource:0}: Error finding container 7bd37c140236f6c1c35cba7768d51f7336d68593ba5bc568315c32b5e4c3ed68: Status 404 returned error can't find the container with id 7bd37c140236f6c1c35cba7768d51f7336d68593ba5bc568315c32b5e4c3ed68 Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.188678 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n" event={"ID":"1f1ef15a-9832-4ee5-8077-066329f6180a","Type":"ContainerStarted","Data":"d62a3730ca1ac8a5f7d20db46092284275fc8c4c92737748acd30f0e5e13d712"} Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.265371 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.274562 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.306741 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.306813 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.306943 4828 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.307003 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs podName:408ecf49-524f-4743-9cef-5c65877dd176 nodeName:}" failed. No retries permitted until 2025-12-05 19:20:11.306989186 +0000 UTC m=+989.202211492 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs") pod "openstack-operator-controller-manager-d5958f94b-76zjx" (UID: "408ecf49-524f-4743-9cef-5c65877dd176") : secret "webhook-server-cert" not found Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.307284 4828 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.308175 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs podName:408ecf49-524f-4743-9cef-5c65877dd176 nodeName:}" failed. No retries permitted until 2025-12-05 19:20:11.307329716 +0000 UTC m=+989.202552022 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs") pod "openstack-operator-controller-manager-d5958f94b-76zjx" (UID: "408ecf49-524f-4743-9cef-5c65877dd176") : secret "metrics-server-cert" not found Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.402998 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2"] Dec 05 19:20:10 crc kubenswrapper[4828]: W1205 19:20:10.415210 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e335b54_f84b_4d91_a58e_0348728d171e.slice/crio-ee54094c0d012cb0897b04b4399793cd4b9948210bc78bbcdd8f55d3f17c0988 WatchSource:0}: Error finding container ee54094c0d012cb0897b04b4399793cd4b9948210bc78bbcdd8f55d3f17c0988: Status 404 returned error can't find the container with id ee54094c0d012cb0897b04b4399793cd4b9948210bc78bbcdd8f55d3f17c0988 Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.426741 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh"] Dec 05 19:20:10 crc kubenswrapper[4828]: W1205 19:20:10.428498 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda03904e7_57be_4491_b11d_c8e698b718e6.slice/crio-b8b66312f518a28c30493e513a0e234e5b07d819297d06d9f91314bfa699d801 WatchSource:0}: Error finding container b8b66312f518a28c30493e513a0e234e5b07d819297d06d9f91314bfa699d801: Status 404 returned error can't find the container with id b8b66312f518a28c30493e513a0e234e5b07d819297d06d9f91314bfa699d801 Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.663407 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.668135 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.684637 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.689967 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.714763 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert\") pod \"infra-operator-controller-manager-575477cdfc-lrhm5\" (UID: \"03c4fc5d-6be1-47b4-9c39-7bb86046dafd\") " pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.715051 4828 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.715426 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert podName:03c4fc5d-6be1-47b4-9c39-7bb86046dafd nodeName:}" failed. 
No retries permitted until 2025-12-05 19:20:12.715404948 +0000 UTC m=+990.610627264 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert") pod "infra-operator-controller-manager-575477cdfc-lrhm5" (UID: "03c4fc5d-6be1-47b4-9c39-7bb86046dafd") : secret "infra-operator-webhook-server-cert" not found Dec 05 19:20:10 crc kubenswrapper[4828]: W1205 19:20:10.716657 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5d6b211_6f88_45fe_8e38_608271465dfe.slice/crio-031ce4c2e48127f23845373a677b4794e2d8e2549323a2651801bafbc3d4e713 WatchSource:0}: Error finding container 031ce4c2e48127f23845373a677b4794e2d8e2549323a2651801bafbc3d4e713: Status 404 returned error can't find the container with id 031ce4c2e48127f23845373a677b4794e2d8e2549323a2651801bafbc3d4e713 Dec 05 19:20:10 crc kubenswrapper[4828]: W1205 19:20:10.719724 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b18c18d_624d_4d50_95ba_a4f755f74936.slice/crio-ff48eadcb30cca42620228c1b7663349b30aa7f4ec7c319521704045d0d8702f WatchSource:0}: Error finding container ff48eadcb30cca42620228c1b7663349b30aa7f4ec7c319521704045d0d8702f: Status 404 returned error can't find the container with id ff48eadcb30cca42620228c1b7663349b30aa7f4ec7c319521704045d0d8702f Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.814349 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.821068 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9"] Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.826146 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg"] Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.831085 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pwgx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-4gr5g_openstack-operators(ba58375c-b3fa-4eb8-8813-c55f003674ca): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.832514 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g"] Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.833708 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pwgx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-4gr5g_openstack-operators(ba58375c-b3fa-4eb8-8813-c55f003674ca): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 19:20:10 crc kubenswrapper[4828]: W1205 19:20:10.834198 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c1110f4_40af_416e_9624_22a901897000.slice/crio-a7be6aea7d019621d27dd2339f3774df0bd0a67c6f77208fd57db635f5695d8c WatchSource:0}: Error finding container 
a7be6aea7d019621d27dd2339f3774df0bd0a67c6f77208fd57db635f5695d8c: Status 404 returned error can't find the container with id a7be6aea7d019621d27dd2339f3774df0bd0a67c6f77208fd57db635f5695d8c Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.835270 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g" podUID="ba58375c-b3fa-4eb8-8813-c55f003674ca" Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.838833 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c"] Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.839096 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h8f2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-hdxm9_openstack-operators(8c1110f4-40af-416e-9624-22a901897000): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.842243 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h8f2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-hdxm9_openstack-operators(8c1110f4-40af-416e-9624-22a901897000): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.843844 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9" podUID="8c1110f4-40af-416e-9624-22a901897000" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.844584 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sdlqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-h2d97_openstack-operators(13474ecf-c76e-400f-bc72-70c11ab8356b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.845118 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-h2d97"] Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.846416 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sdlqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-h2d97_openstack-operators(13474ecf-c76e-400f-bc72-70c11ab8356b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.847568 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" podUID="13474ecf-c76e-400f-bc72-70c11ab8356b" Dec 05 19:20:10 crc kubenswrapper[4828]: I1205 19:20:10.852518 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp"] Dec 05 19:20:10 crc kubenswrapper[4828]: W1205 19:20:10.859966 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod757d5884_94d5_45f1_ae2c_49fd93ce512c.slice/crio-369cacb69e8982356dc9ae811d943e53f5e6e88b0dc526eb703875b3a1dc32bc WatchSource:0}: Error finding container 369cacb69e8982356dc9ae811d943e53f5e6e88b0dc526eb703875b3a1dc32bc: Status 404 returned error can't find the container with id 369cacb69e8982356dc9ae811d943e53f5e6e88b0dc526eb703875b3a1dc32bc Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.863290 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gprx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-l6gtp_openstack-operators(757d5884-94d5-45f1-ae2c-49fd93ce512c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.865207 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gprx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-l6gtp_openstack-operators(757d5884-94d5-45f1-ae2c-49fd93ce512c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.866975 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp" podUID="757d5884-94d5-45f1-ae2c-49fd93ce512c" Dec 05 19:20:10 crc kubenswrapper[4828]: W1205 19:20:10.869594 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48135908_b8f6_47ab_aeb7_3f74bb3e2cde.slice/crio-63f39c4e3d8cf1c6b013c1073ab0b4cba80070d83118529af7621b444148e7af WatchSource:0}: Error finding container 63f39c4e3d8cf1c6b013c1073ab0b4cba80070d83118529af7621b444148e7af: Status 404 returned error can't find the container with id 63f39c4e3d8cf1c6b013c1073ab0b4cba80070d83118529af7621b444148e7af Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.876075 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ccpjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-6xg2c_openstack-operators(48135908-b8f6-47ab-aeb7-3f74bb3e2cde): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.880107 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ccpjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-6xg2c_openstack-operators(48135908-b8f6-47ab-aeb7-3f74bb3e2cde): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.881208 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: 
\"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c" podUID="48135908-b8f6-47ab-aeb7-3f74bb3e2cde" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.881397 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fp9tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-qdslg_openstack-operators(bf305ed3-e27f-42bc-9fb7-bec903ca820f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.883548 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fp9tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-qdslg_openstack-operators(bf305ed3-e27f-42bc-9fb7-bec903ca820f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 19:20:10 crc kubenswrapper[4828]: E1205 19:20:10.884770 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" podUID="bf305ed3-e27f-42bc-9fb7-bec903ca820f" Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.021316 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j\" (UID: \"1ce74c6c-ee96-4712-983f-4090e176f31e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:11 crc kubenswrapper[4828]: E1205 19:20:11.021479 4828 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 19:20:11 crc kubenswrapper[4828]: E1205 19:20:11.021523 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert podName:1ce74c6c-ee96-4712-983f-4090e176f31e nodeName:}" failed. No retries permitted until 2025-12-05 19:20:13.021509657 +0000 UTC m=+990.916731963 (durationBeforeRetry 2s). 
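
Note: each kuberuntime_manager "Unhandled Error" above dumps the failing container's entire Go struct, which is hard to scan. Rendered as a manifest fragment, the first of them (the ovn-operator manager container logged at 19:20:10.831) reduces to the following; this is a trimmed, readable re-rendering of the dump above, not an authoritative manifest:

    name: manager
    image: quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59
    command: ["/manager"]
    args: ["--leader-elect", "--health-probe-bind-address=:8081", "--metrics-bind-address=127.0.0.1:8080"]
    env:
    - { name: LEASE_DURATION,  value: "30" }
    - { name: RENEW_DEADLINE,  value: "20" }
    - { name: RETRY_PERIOD,    value: "5" }
    - { name: ENABLE_WEBHOOKS, value: "false" }
    - { name: METRICS_CERTS,   value: "false" }
    resources:
      limits:   { cpu: 500m, memory: 512Mi }   # 536870912 bytes in the dump
      requests: { cpu: 10m,  memory: 256Mi }   # 268435456 bytes in the dump
    livenessProbe:  { httpGet: { path: /healthz, port: 8081 }, initialDelaySeconds: 15, periodSeconds: 20 }
    readinessProbe: { httpGet: { path: /readyz,  port: 8081 }, initialDelaySeconds: 5,  periodSeconds: 10 }
    securityContext:
      allowPrivilegeEscalation: false
      capabilities: { drop: ["MKNOD"] }

Nothing in the spec is malformed; the start failed only for the ErrImagePull reason recorded alongside it.
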
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" (UID: "1ce74c6c-ee96-4712-983f-4090e176f31e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.194497 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz" event={"ID":"c6d11d68-9609-432a-a855-4789df83739d","Type":"ContainerStarted","Data":"fe7cb05242428615b8a04a16bb5ca684c93c0b6c1775c74e5476b44652e0e4c2"} Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.195431 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9" event={"ID":"8c1110f4-40af-416e-9624-22a901897000","Type":"ContainerStarted","Data":"a7be6aea7d019621d27dd2339f3774df0bd0a67c6f77208fd57db635f5695d8c"} Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.201504 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g" event={"ID":"ba58375c-b3fa-4eb8-8813-c55f003674ca","Type":"ContainerStarted","Data":"4cd36b8556bdca2dc3347c90b61535377b25d27fb880e21cdb36051cac8012a7"} Dec 05 19:20:11 crc kubenswrapper[4828]: E1205 19:20:11.203959 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g" podUID="ba58375c-b3fa-4eb8-8813-c55f003674ca" Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.204406 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh" event={"ID":"a03904e7-57be-4491-b11d-c8e698b718e6","Type":"ContainerStarted","Data":"b8b66312f518a28c30493e513a0e234e5b07d819297d06d9f91314bfa699d801"} Dec 05 19:20:11 crc kubenswrapper[4828]: E1205 19:20:11.208541 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9" podUID="8c1110f4-40af-416e-9624-22a901897000" Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.210149 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2" event={"ID":"1e335b54-f84b-4d91-a58e-0348728d171e","Type":"ContainerStarted","Data":"ee54094c0d012cb0897b04b4399793cd4b9948210bc78bbcdd8f55d3f17c0988"} Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.212049 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq" 
event={"ID":"dabc71c3-947a-4d4c-90bd-b5bb473ce013","Type":"ContainerStarted","Data":"461a4451f536214fba1d169e2fbcfbc29a50e88c4769d669f980ebadb6f35067"} Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.218193 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" event={"ID":"bf305ed3-e27f-42bc-9fb7-bec903ca820f","Type":"ContainerStarted","Data":"a5d2e25df3552a7c091c8b24eeed7a547eefdfd93ff595bc0a9c9986a50019d5"} Dec 05 19:20:11 crc kubenswrapper[4828]: E1205 19:20:11.227722 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" podUID="bf305ed3-e27f-42bc-9fb7-bec903ca820f" Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.229048 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh" event={"ID":"a5d6b211-6f88-45fe-8e38-608271465dfe","Type":"ContainerStarted","Data":"031ce4c2e48127f23845373a677b4794e2d8e2549323a2651801bafbc3d4e713"} Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.238203 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c" event={"ID":"48135908-b8f6-47ab-aeb7-3f74bb3e2cde","Type":"ContainerStarted","Data":"63f39c4e3d8cf1c6b013c1073ab0b4cba80070d83118529af7621b444148e7af"} Dec 05 19:20:11 crc kubenswrapper[4828]: E1205 19:20:11.240383 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c" podUID="48135908-b8f6-47ab-aeb7-3f74bb3e2cde" Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.240966 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm" event={"ID":"04671bff-8616-471f-bd46-21e6b17227eb","Type":"ContainerStarted","Data":"57bcd149a46e82444d9ec34b7d37bbf1b324ba74418bd712f6b5f5cb8cff0d51"} Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.250423 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4" event={"ID":"7dbe4cda-8493-4e63-9544-7dfff2495c65","Type":"ContainerStarted","Data":"e660edcd0e20536ffc27f58a628a07fd572212574a81bc20c6f677f52740618e"} Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.254385 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg" event={"ID":"cd2986fb-f299-446c-85b7-28427df0ca51","Type":"ContainerStarted","Data":"8c77e62ef8876abc48abe183a3ee616fa99d416732af78886231cbb31151b0f2"} Dec 05 19:20:11 crc 
kubenswrapper[4828]: I1205 19:20:11.261668 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp" event={"ID":"757d5884-94d5-45f1-ae2c-49fd93ce512c","Type":"ContainerStarted","Data":"369cacb69e8982356dc9ae811d943e53f5e6e88b0dc526eb703875b3a1dc32bc"} Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.269921 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" event={"ID":"13474ecf-c76e-400f-bc72-70c11ab8356b","Type":"ContainerStarted","Data":"f8ff8a6176afce0dc3784d5dd22e5da67f16763081e8e7df11adb4af67f9ff78"} Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.271835 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz" event={"ID":"a27719f3-1ce1-4a2b-876f-f280966f8e8c","Type":"ContainerStarted","Data":"7bd37c140236f6c1c35cba7768d51f7336d68593ba5bc568315c32b5e4c3ed68"} Dec 05 19:20:11 crc kubenswrapper[4828]: E1205 19:20:11.272530 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp" podUID="757d5884-94d5-45f1-ae2c-49fd93ce512c" Dec 05 19:20:11 crc kubenswrapper[4828]: E1205 19:20:11.273162 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" podUID="13474ecf-c76e-400f-bc72-70c11ab8356b" Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.273787 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5" event={"ID":"f5bca056-89ff-4e36-82b7-ad44d9dc00d6","Type":"ContainerStarted","Data":"38908b6c137d6138446ab05b34855d6786263ec772a06c8262aa91044b5240d4"} Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.279320 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6" event={"ID":"3b18c18d-624d-4d50-95ba-a4f755f74936","Type":"ContainerStarted","Data":"ff48eadcb30cca42620228c1b7663349b30aa7f4ec7c319521704045d0d8702f"} Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.285845 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b" event={"ID":"16bfe264-a5d1-433e-93ee-c6821e882c4c","Type":"ContainerStarted","Data":"6b722807ed614c00efbe206ea9120816bccdd20655b90293c4aa3559480b4899"} Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.328470 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:11 crc kubenswrapper[4828]: I1205 19:20:11.328545 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:11 crc kubenswrapper[4828]: E1205 19:20:11.328690 4828 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 19:20:11 crc kubenswrapper[4828]: E1205 19:20:11.328734 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs podName:408ecf49-524f-4743-9cef-5c65877dd176 nodeName:}" failed. No retries permitted until 2025-12-05 19:20:13.328720525 +0000 UTC m=+991.223942831 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs") pod "openstack-operator-controller-manager-d5958f94b-76zjx" (UID: "408ecf49-524f-4743-9cef-5c65877dd176") : secret "webhook-server-cert" not found Dec 05 19:20:11 crc kubenswrapper[4828]: E1205 19:20:11.329320 4828 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 19:20:11 crc kubenswrapper[4828]: E1205 19:20:11.329370 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs podName:408ecf49-524f-4743-9cef-5c65877dd176 nodeName:}" failed. No retries permitted until 2025-12-05 19:20:13.329354281 +0000 UTC m=+991.224576577 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs") pod "openstack-operator-controller-manager-d5958f94b-76zjx" (UID: "408ecf49-524f-4743-9cef-5c65877dd176") : secret "metrics-server-cert" not found Dec 05 19:20:12 crc kubenswrapper[4828]: E1205 19:20:12.330258 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" podUID="13474ecf-c76e-400f-bc72-70c11ab8356b" Dec 05 19:20:12 crc kubenswrapper[4828]: E1205 19:20:12.335072 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9" podUID="8c1110f4-40af-416e-9624-22a901897000" Dec 05 19:20:12 crc kubenswrapper[4828]: E1205 19:20:12.335183 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp" podUID="757d5884-94d5-45f1-ae2c-49fd93ce512c" Dec 05 19:20:12 crc kubenswrapper[4828]: E1205 19:20:12.335237 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" podUID="bf305ed3-e27f-42bc-9fb7-bec903ca820f" Dec 05 19:20:12 crc kubenswrapper[4828]: E1205 19:20:12.335280 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c" podUID="48135908-b8f6-47ab-aeb7-3f74bb3e2cde" Dec 05 19:20:12 crc kubenswrapper[4828]: E1205 
19:20:12.335321 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g" podUID="ba58375c-b3fa-4eb8-8813-c55f003674ca" Dec 05 19:20:12 crc kubenswrapper[4828]: I1205 19:20:12.760035 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert\") pod \"infra-operator-controller-manager-575477cdfc-lrhm5\" (UID: \"03c4fc5d-6be1-47b4-9c39-7bb86046dafd\") " pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:20:12 crc kubenswrapper[4828]: E1205 19:20:12.760207 4828 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 19:20:12 crc kubenswrapper[4828]: E1205 19:20:12.760259 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert podName:03c4fc5d-6be1-47b4-9c39-7bb86046dafd nodeName:}" failed. No retries permitted until 2025-12-05 19:20:16.760245231 +0000 UTC m=+994.655467537 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert") pod "infra-operator-controller-manager-575477cdfc-lrhm5" (UID: "03c4fc5d-6be1-47b4-9c39-7bb86046dafd") : secret "infra-operator-webhook-server-cert" not found Dec 05 19:20:13 crc kubenswrapper[4828]: I1205 19:20:13.063901 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j\" (UID: \"1ce74c6c-ee96-4712-983f-4090e176f31e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:13 crc kubenswrapper[4828]: E1205 19:20:13.064056 4828 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 19:20:13 crc kubenswrapper[4828]: E1205 19:20:13.064114 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert podName:1ce74c6c-ee96-4712-983f-4090e176f31e nodeName:}" failed. No retries permitted until 2025-12-05 19:20:17.064097238 +0000 UTC m=+994.959319554 (durationBeforeRetry 4s). 
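
Note: the ImagePullBackOff records above are the follow-on of the earlier "pull QPS exceeded" rejections. That message comes from the kubelet's client-side image-pull rate limiter, not from the registry: when many operator pods begin pulling at once, pulls beyond the configured rate fail immediately with ErrImagePull and then re-enter the standard, exponentially backed-off pull retry seen here. The limiter is governed by registryPullQPS and registryBurst in the KubeletConfiguration; the fragment below shows the upstream defaults for illustration (the cluster's actual settings are not visible in this log):

    # Illustrative KubeletConfiguration fragment (upstream defaults).
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    registryPullQPS: 5         # image pulls per second; 0 disables the limit
    registryBurst: 10          # temporary burst allowance; used only when QPS > 0
    serializeImagePulls: true  # pulls also run one at a time by default
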
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" (UID: "1ce74c6c-ee96-4712-983f-4090e176f31e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 19:20:13 crc kubenswrapper[4828]: I1205 19:20:13.368250 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:13 crc kubenswrapper[4828]: I1205 19:20:13.368356 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:13 crc kubenswrapper[4828]: E1205 19:20:13.368482 4828 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 19:20:13 crc kubenswrapper[4828]: E1205 19:20:13.368489 4828 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 19:20:13 crc kubenswrapper[4828]: E1205 19:20:13.368547 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs podName:408ecf49-524f-4743-9cef-5c65877dd176 nodeName:}" failed. No retries permitted until 2025-12-05 19:20:17.36852784 +0000 UTC m=+995.263750146 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs") pod "openstack-operator-controller-manager-d5958f94b-76zjx" (UID: "408ecf49-524f-4743-9cef-5c65877dd176") : secret "webhook-server-cert" not found Dec 05 19:20:13 crc kubenswrapper[4828]: E1205 19:20:13.368565 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs podName:408ecf49-524f-4743-9cef-5c65877dd176 nodeName:}" failed. No retries permitted until 2025-12-05 19:20:17.368558511 +0000 UTC m=+995.263780817 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs") pod "openstack-operator-controller-manager-d5958f94b-76zjx" (UID: "408ecf49-524f-4743-9cef-5c65877dd176") : secret "metrics-server-cert" not found Dec 05 19:20:13 crc kubenswrapper[4828]: I1205 19:20:13.869836 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lcnvv"] Dec 05 19:20:13 crc kubenswrapper[4828]: I1205 19:20:13.871517 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:20:13 crc kubenswrapper[4828]: I1205 19:20:13.884917 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcnvv"] Dec 05 19:20:13 crc kubenswrapper[4828]: I1205 19:20:13.981707 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64594188-f4e0-4d0b-a5f9-e118634e82cb-utilities\") pod \"redhat-marketplace-lcnvv\" (UID: \"64594188-f4e0-4d0b-a5f9-e118634e82cb\") " pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:20:13 crc kubenswrapper[4828]: I1205 19:20:13.981801 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64594188-f4e0-4d0b-a5f9-e118634e82cb-catalog-content\") pod \"redhat-marketplace-lcnvv\" (UID: \"64594188-f4e0-4d0b-a5f9-e118634e82cb\") " pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:20:13 crc kubenswrapper[4828]: I1205 19:20:13.981834 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wznr\" (UniqueName: \"kubernetes.io/projected/64594188-f4e0-4d0b-a5f9-e118634e82cb-kube-api-access-5wznr\") pod \"redhat-marketplace-lcnvv\" (UID: \"64594188-f4e0-4d0b-a5f9-e118634e82cb\") " pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:20:14 crc kubenswrapper[4828]: I1205 19:20:14.083289 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64594188-f4e0-4d0b-a5f9-e118634e82cb-catalog-content\") pod \"redhat-marketplace-lcnvv\" (UID: \"64594188-f4e0-4d0b-a5f9-e118634e82cb\") " pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:20:14 crc kubenswrapper[4828]: I1205 19:20:14.083343 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wznr\" (UniqueName: \"kubernetes.io/projected/64594188-f4e0-4d0b-a5f9-e118634e82cb-kube-api-access-5wznr\") pod \"redhat-marketplace-lcnvv\" (UID: \"64594188-f4e0-4d0b-a5f9-e118634e82cb\") " pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:20:14 crc kubenswrapper[4828]: I1205 19:20:14.083463 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64594188-f4e0-4d0b-a5f9-e118634e82cb-utilities\") pod \"redhat-marketplace-lcnvv\" (UID: \"64594188-f4e0-4d0b-a5f9-e118634e82cb\") " pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:20:14 crc kubenswrapper[4828]: I1205 19:20:14.083887 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64594188-f4e0-4d0b-a5f9-e118634e82cb-catalog-content\") pod \"redhat-marketplace-lcnvv\" (UID: \"64594188-f4e0-4d0b-a5f9-e118634e82cb\") " pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:20:14 crc kubenswrapper[4828]: I1205 19:20:14.083964 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64594188-f4e0-4d0b-a5f9-e118634e82cb-utilities\") pod \"redhat-marketplace-lcnvv\" (UID: \"64594188-f4e0-4d0b-a5f9-e118634e82cb\") " pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:20:14 crc kubenswrapper[4828]: I1205 19:20:14.114930 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5wznr\" (UniqueName: \"kubernetes.io/projected/64594188-f4e0-4d0b-a5f9-e118634e82cb-kube-api-access-5wznr\") pod \"redhat-marketplace-lcnvv\" (UID: \"64594188-f4e0-4d0b-a5f9-e118634e82cb\") " pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:20:14 crc kubenswrapper[4828]: I1205 19:20:14.255421 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:20:16 crc kubenswrapper[4828]: I1205 19:20:16.822729 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert\") pod \"infra-operator-controller-manager-575477cdfc-lrhm5\" (UID: \"03c4fc5d-6be1-47b4-9c39-7bb86046dafd\") " pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:20:16 crc kubenswrapper[4828]: E1205 19:20:16.823473 4828 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 19:20:16 crc kubenswrapper[4828]: E1205 19:20:16.823521 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert podName:03c4fc5d-6be1-47b4-9c39-7bb86046dafd nodeName:}" failed. No retries permitted until 2025-12-05 19:20:24.823506004 +0000 UTC m=+1002.718728310 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert") pod "infra-operator-controller-manager-575477cdfc-lrhm5" (UID: "03c4fc5d-6be1-47b4-9c39-7bb86046dafd") : secret "infra-operator-webhook-server-cert" not found Dec 05 19:20:17 crc kubenswrapper[4828]: I1205 19:20:17.127824 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j\" (UID: \"1ce74c6c-ee96-4712-983f-4090e176f31e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:17 crc kubenswrapper[4828]: E1205 19:20:17.127972 4828 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 19:20:17 crc kubenswrapper[4828]: E1205 19:20:17.128017 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert podName:1ce74c6c-ee96-4712-983f-4090e176f31e nodeName:}" failed. No retries permitted until 2025-12-05 19:20:25.128003949 +0000 UTC m=+1003.023226245 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" (UID: "1ce74c6c-ee96-4712-983f-4090e176f31e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 19:20:17 crc kubenswrapper[4828]: I1205 19:20:17.431707 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:17 crc kubenswrapper[4828]: I1205 19:20:17.431782 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:17 crc kubenswrapper[4828]: E1205 19:20:17.431900 4828 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 19:20:17 crc kubenswrapper[4828]: E1205 19:20:17.431901 4828 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 19:20:17 crc kubenswrapper[4828]: E1205 19:20:17.431966 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs podName:408ecf49-524f-4743-9cef-5c65877dd176 nodeName:}" failed. No retries permitted until 2025-12-05 19:20:25.431946798 +0000 UTC m=+1003.327169124 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs") pod "openstack-operator-controller-manager-d5958f94b-76zjx" (UID: "408ecf49-524f-4743-9cef-5c65877dd176") : secret "metrics-server-cert" not found Dec 05 19:20:17 crc kubenswrapper[4828]: E1205 19:20:17.432007 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs podName:408ecf49-524f-4743-9cef-5c65877dd176 nodeName:}" failed. No retries permitted until 2025-12-05 19:20:25.431976878 +0000 UTC m=+1003.327199204 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs") pod "openstack-operator-controller-manager-d5958f94b-76zjx" (UID: "408ecf49-524f-4743-9cef-5c65877dd176") : secret "webhook-server-cert" not found Dec 05 19:20:24 crc kubenswrapper[4828]: E1205 19:20:24.345102 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 05 19:20:24 crc kubenswrapper[4828]: E1205 19:20:24.346101 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p8fd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-cfnbh_openstack-operators(a5d6b211-6f88-45fe-8e38-608271465dfe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:20:24 crc kubenswrapper[4828]: I1205 19:20:24.848043 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert\") pod \"infra-operator-controller-manager-575477cdfc-lrhm5\" (UID: \"03c4fc5d-6be1-47b4-9c39-7bb86046dafd\") " 
pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:20:24 crc kubenswrapper[4828]: I1205 19:20:24.870411 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03c4fc5d-6be1-47b4-9c39-7bb86046dafd-cert\") pod \"infra-operator-controller-manager-575477cdfc-lrhm5\" (UID: \"03c4fc5d-6be1-47b4-9c39-7bb86046dafd\") " pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:20:25 crc kubenswrapper[4828]: I1205 19:20:25.117326 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:20:25 crc kubenswrapper[4828]: I1205 19:20:25.152264 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j\" (UID: \"1ce74c6c-ee96-4712-983f-4090e176f31e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:25 crc kubenswrapper[4828]: I1205 19:20:25.156434 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ce74c6c-ee96-4712-983f-4090e176f31e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j\" (UID: \"1ce74c6c-ee96-4712-983f-4090e176f31e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:25 crc kubenswrapper[4828]: I1205 19:20:25.455718 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:20:25 crc kubenswrapper[4828]: I1205 19:20:25.459641 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:25 crc kubenswrapper[4828]: I1205 19:20:25.459715 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:25 crc kubenswrapper[4828]: E1205 19:20:25.459861 4828 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 19:20:25 crc kubenswrapper[4828]: E1205 19:20:25.459936 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs podName:408ecf49-524f-4743-9cef-5c65877dd176 nodeName:}" failed. No retries permitted until 2025-12-05 19:20:41.459918134 +0000 UTC m=+1019.355140440 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs") pod "openstack-operator-controller-manager-d5958f94b-76zjx" (UID: "408ecf49-524f-4743-9cef-5c65877dd176") : secret "webhook-server-cert" not found Dec 05 19:20:25 crc kubenswrapper[4828]: E1205 19:20:25.460305 4828 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 19:20:25 crc kubenswrapper[4828]: E1205 19:20:25.460340 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs podName:408ecf49-524f-4743-9cef-5c65877dd176 nodeName:}" failed. No retries permitted until 2025-12-05 19:20:41.460330435 +0000 UTC m=+1019.355552731 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs") pod "openstack-operator-controller-manager-d5958f94b-76zjx" (UID: "408ecf49-524f-4743-9cef-5c65877dd176") : secret "metrics-server-cert" not found Dec 05 19:20:25 crc kubenswrapper[4828]: E1205 19:20:25.581947 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 05 19:20:25 crc kubenswrapper[4828]: E1205 19:20:25.582123 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mkpf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-cf5gg_openstack-operators(cd2986fb-f299-446c-85b7-28427df0ca51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:20:26 crc kubenswrapper[4828]: E1205 19:20:26.785126 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 05 19:20:26 crc kubenswrapper[4828]: E1205 19:20:26.785349 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b2vwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-g2wd4_openstack-operators(7dbe4cda-8493-4e63-9544-7dfff2495c65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:20:27 crc kubenswrapper[4828]: E1205 19:20:27.520802 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 05 19:20:27 crc kubenswrapper[4828]: E1205 19:20:27.521255 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nl65t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-gsczh_openstack-operators(a03904e7-57be-4491-b11d-c8e698b718e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:20:28 crc kubenswrapper[4828]: E1205 19:20:28.773137 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" Dec 05 19:20:28 crc kubenswrapper[4828]: E1205 19:20:28.773298 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mh7nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-cr94b_openstack-operators(16bfe264-a5d1-433e-93ee-c6821e882c4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:20:29 crc kubenswrapper[4828]: E1205 19:20:29.313549 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" Dec 05 19:20:29 crc kubenswrapper[4828]: E1205 19:20:29.313799 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lt9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-cgkv6_openstack-operators(3b18c18d-624d-4d50-95ba-a4f755f74936): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:20:29 crc kubenswrapper[4828]: E1205 19:20:29.893857 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" Dec 05 19:20:29 crc kubenswrapper[4828]: E1205 19:20:29.894364 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7sxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-v92pz_openstack-operators(a27719f3-1ce1-4a2b-876f-f280966f8e8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:20:30 crc kubenswrapper[4828]: E1205 19:20:30.417818 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Dec 05 19:20:30 crc kubenswrapper[4828]: E1205 19:20:30.418034 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27bgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-jbg6n_openstack-operators(1f1ef15a-9832-4ee5-8077-066329f6180a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:20:37 crc kubenswrapper[4828]: E1205 19:20:37.408169 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 05 19:20:37 crc kubenswrapper[4828]: E1205 19:20:37.409019 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vkmk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-pbmt2_openstack-operators(1e335b54-f84b-4d91-a58e-0348728d171e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:20:37 crc kubenswrapper[4828]: I1205 19:20:37.411396 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 19:20:37 crc kubenswrapper[4828]: E1205 19:20:37.869062 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 05 19:20:37 crc kubenswrapper[4828]: E1205 19:20:37.869243 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4j5hm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-w4rgq_openstack-operators(dabc71c3-947a-4d4c-90bd-b5bb473ce013): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:20:37 crc kubenswrapper[4828]: E1205 19:20:37.870420 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq" podUID="dabc71c3-947a-4d4c-90bd-b5bb473ce013" Dec 05 19:20:38 crc kubenswrapper[4828]: I1205 19:20:38.334044 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"] Dec 05 19:20:38 crc kubenswrapper[4828]: E1205 19:20:38.534310 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq" podUID="dabc71c3-947a-4d4c-90bd-b5bb473ce013" Dec 05 19:20:38 crc kubenswrapper[4828]: I1205 19:20:38.934238 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcnvv"] Dec 05 19:20:38 crc kubenswrapper[4828]: I1205 19:20:38.988220 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j"] Dec 05 19:20:39 crc kubenswrapper[4828]: W1205 19:20:39.009504 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64594188_f4e0_4d0b_a5f9_e118634e82cb.slice/crio-2b14f5cfc1eec376beacf4df3e0b786b22330f1e5f4c27ed83fc99a358d90abb WatchSource:0}: Error finding container 2b14f5cfc1eec376beacf4df3e0b786b22330f1e5f4c27ed83fc99a358d90abb: Status 404 returned error can't find the container with id 2b14f5cfc1eec376beacf4df3e0b786b22330f1e5f4c27ed83fc99a358d90abb Dec 05 19:20:39 crc kubenswrapper[4828]: W1205 19:20:39.009886 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce74c6c_ee96_4712_983f_4090e176f31e.slice/crio-481d63c92fab4492ae96e2514b29c767229ebd68cbdcf724e27855fded6f8d23 WatchSource:0}: Error finding container 481d63c92fab4492ae96e2514b29c767229ebd68cbdcf724e27855fded6f8d23: Status 404 returned error can't find the container with id 481d63c92fab4492ae96e2514b29c767229ebd68cbdcf724e27855fded6f8d23 Dec 05 19:20:39 crc kubenswrapper[4828]: I1205 19:20:39.537640 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz" event={"ID":"c6d11d68-9609-432a-a855-4789df83739d","Type":"ContainerStarted","Data":"a046e07f09275327028acb272e00c512163e8d0380382005975967a8fba1c9b7"} Dec 05 19:20:39 crc kubenswrapper[4828]: I1205 19:20:39.541997 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" event={"ID":"1ce74c6c-ee96-4712-983f-4090e176f31e","Type":"ContainerStarted","Data":"481d63c92fab4492ae96e2514b29c767229ebd68cbdcf724e27855fded6f8d23"} Dec 05 19:20:39 crc kubenswrapper[4828]: I1205 19:20:39.544501 4828 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5" event={"ID":"f5bca056-89ff-4e36-82b7-ad44d9dc00d6","Type":"ContainerStarted","Data":"fa68506573ade8e510af45cf561dd8464b59ddf70987f36ccf7ecce68098e7a2"} Dec 05 19:20:39 crc kubenswrapper[4828]: I1205 19:20:39.547963 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm" event={"ID":"04671bff-8616-471f-bd46-21e6b17227eb","Type":"ContainerStarted","Data":"6b3c1df2e71adc7b6821c157e307441a90d9d0b75af6e039347f74bbee5a6173"} Dec 05 19:20:39 crc kubenswrapper[4828]: I1205 19:20:39.550103 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg" event={"ID":"4276bd34-acab-4936-a044-7d00e33e806f","Type":"ContainerStarted","Data":"d3952abf971c0701be817a31764aa93a86a4e9e9b845f84ed4676585132b2b13"} Dec 05 19:20:39 crc kubenswrapper[4828]: I1205 19:20:39.551931 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"71830510651b8f58af5029d60f4f39578f3dd96cd6b3026a8369d3daff0937bd"} Dec 05 19:20:39 crc kubenswrapper[4828]: I1205 19:20:39.552968 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcnvv" event={"ID":"64594188-f4e0-4d0b-a5f9-e118634e82cb","Type":"ContainerStarted","Data":"2b14f5cfc1eec376beacf4df3e0b786b22330f1e5f4c27ed83fc99a358d90abb"} Dec 05 19:20:41 crc kubenswrapper[4828]: I1205 19:20:41.537465 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:41 crc kubenswrapper[4828]: I1205 19:20:41.537889 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:41 crc kubenswrapper[4828]: I1205 19:20:41.543971 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-metrics-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:41 crc kubenswrapper[4828]: I1205 19:20:41.545899 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/408ecf49-524f-4743-9cef-5c65877dd176-webhook-certs\") pod \"openstack-operator-controller-manager-d5958f94b-76zjx\" (UID: \"408ecf49-524f-4743-9cef-5c65877dd176\") " pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:41 crc kubenswrapper[4828]: I1205 19:20:41.781740 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" Dec 05 19:20:42 crc kubenswrapper[4828]: I1205 19:20:42.584200 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" event={"ID":"13474ecf-c76e-400f-bc72-70c11ab8356b","Type":"ContainerStarted","Data":"b03c7412da07260a8722531fac4de03581089d2b321c696177f68d2beb145c85"} Dec 05 19:20:42 crc kubenswrapper[4828]: I1205 19:20:42.586685 4828 generic.go:334] "Generic (PLEG): container finished" podID="64594188-f4e0-4d0b-a5f9-e118634e82cb" containerID="895c0fba81ad2f311572838c7f47d0bca8b676d05b383f2fcd2b55361440b44d" exitCode=0 Dec 05 19:20:42 crc kubenswrapper[4828]: I1205 19:20:42.586721 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcnvv" event={"ID":"64594188-f4e0-4d0b-a5f9-e118634e82cb","Type":"ContainerDied","Data":"895c0fba81ad2f311572838c7f47d0bca8b676d05b383f2fcd2b55361440b44d"} Dec 05 19:20:43 crc kubenswrapper[4828]: I1205 19:20:43.596939 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" event={"ID":"bf305ed3-e27f-42bc-9fb7-bec903ca820f","Type":"ContainerStarted","Data":"743ff82739dd6d245ad5adc9ff9f31a44b8de6dafb916dbd5168dc40b2ff8509"} Dec 05 19:20:44 crc kubenswrapper[4828]: I1205 19:20:44.613632 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c" event={"ID":"48135908-b8f6-47ab-aeb7-3f74bb3e2cde","Type":"ContainerStarted","Data":"560684a7f8cee287abaf187aae8b8bd49ab0d8d6662f1b45faeddf1024fb2842"} Dec 05 19:20:44 crc kubenswrapper[4828]: I1205 19:20:44.614775 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g" event={"ID":"ba58375c-b3fa-4eb8-8813-c55f003674ca","Type":"ContainerStarted","Data":"2416f405ff469a78435319fb69efcef06487103549ae145ffc2296b45a00fe80"} Dec 05 19:20:44 crc kubenswrapper[4828]: I1205 19:20:44.615801 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9" event={"ID":"8c1110f4-40af-416e-9624-22a901897000","Type":"ContainerStarted","Data":"2a08982d5828faf49f2b7c71f8ed0405c9cba205b34422ae204d0e6afc14903a"} Dec 05 19:20:57 crc kubenswrapper[4828]: I1205 19:20:57.723371 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp" event={"ID":"757d5884-94d5-45f1-ae2c-49fd93ce512c","Type":"ContainerStarted","Data":"e19c50b8a911f98e8278b88c043e60124732fe1d8ce7e868ee161db35479269c"} Dec 05 19:20:59 crc kubenswrapper[4828]: E1205 19:20:59.077980 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81" Dec 05 19:20:59 crc kubenswrapper[4828]: E1205 19:20:59.079243 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_I
MAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dvb9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j_openstack-operators(1ce74c6c-ee96-4712-983f-4090e176f31e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.339333 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.339699 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mclzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-qftqg_openstack-operators(4276bd34-acab-4936-a044-7d00e33e806f): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.341155 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg" podUID="4276bd34-acab-4936-a044-7d00e33e806f"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.363014 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.363237 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wznr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lcnvv_openshift-marketplace(64594188-f4e0-4d0b-a5f9-e118634e82cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.363763 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.364009 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ck5ss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-k7qf5_openstack-operators(f5bca056-89ff-4e36-82b7-ad44d9dc00d6): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.365153 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5" podUID="f5bca056-89ff-4e36-82b7-ad44d9dc00d6"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.365213 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lcnvv" podUID="64594188-f4e0-4d0b-a5f9-e118634e82cb"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.405608 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.406007 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.406349 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sdlqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-h2d97_openstack-operators(13474ecf-c76e-400f-bc72-70c11ab8356b): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.406593 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j5qnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-9jbwm_openstack-operators(04671bff-8616-471f-bd46-21e6b17227eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.406666 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.406838 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fp9tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-qdslg_openstack-operators(bf305ed3-e27f-42bc-9fb7-bec903ca820f): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.407506 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" podUID="13474ecf-c76e-400f-bc72-70c11ab8356b"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.407563 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.407638 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lj424,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-77twz_openstack-operators(c6d11d68-9609-432a-a855-4789df83739d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.407700 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm" podUID="04671bff-8616-471f-bd46-21e6b17227eb"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.408766 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" podUID="bf305ed3-e27f-42bc-9fb7-bec903ca820f"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.410927 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz" podUID="c6d11d68-9609-432a-a855-4789df83739d"
Dec 05 19:21:01 crc kubenswrapper[4828]: I1205 19:21:01.746563 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx"]
Dec 05 19:21:01 crc kubenswrapper[4828]: I1205 19:21:01.758162 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz"
Dec 05 19:21:01 crc kubenswrapper[4828]: I1205 19:21:01.758729 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97"
Dec 05 19:21:01 crc kubenswrapper[4828]: I1205 19:21:01.759762 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz"
Dec 05 19:21:01 crc kubenswrapper[4828]: I1205 19:21:01.760480 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97"
Dec 05 19:21:01 crc kubenswrapper[4828]: W1205 19:21:01.767093 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod408ecf49_524f_4743_9cef_5c65877dd176.slice/crio-51892ce229aaddd365a2b3fb7c6f8a09f2f46d0a0b6de1ecdef3efe35646bc6b WatchSource:0}: Error finding container 51892ce229aaddd365a2b3fb7c6f8a09f2f46d0a0b6de1ecdef3efe35646bc6b: Status 404 returned error can't find the container with id 51892ce229aaddd365a2b3fb7c6f8a09f2f46d0a0b6de1ecdef3efe35646bc6b
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.788495 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lcnvv" podUID="64594188-f4e0-4d0b-a5f9-e118634e82cb"
Dec 05 19:21:01 crc kubenswrapper[4828]: E1205 19:21:01.994706 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg" podUID="cd2986fb-f299-446c-85b7-28427df0ca51"
Dec 05 19:21:02 crc kubenswrapper[4828]: E1205 19:21:02.011186 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n" podUID="1f1ef15a-9832-4ee5-8077-066329f6180a"
Dec 05 19:21:02 crc kubenswrapper[4828]: E1205 19:21:02.058125 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" podUID="1ce74c6c-ee96-4712-983f-4090e176f31e"
Dec 05 19:21:02 crc kubenswrapper[4828]: E1205 19:21:02.147666 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4" podUID="7dbe4cda-8493-4e63-9544-7dfff2495c65"
Dec 05 19:21:02 crc kubenswrapper[4828]: E1205 19:21:02.215327 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh" podUID="a5d6b211-6f88-45fe-8e38-608271465dfe"
Dec 05 19:21:02 crc kubenswrapper[4828]: E1205 19:21:02.219866 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b" podUID="16bfe264-a5d1-433e-93ee-c6821e882c4c"
Dec 05 19:21:02 crc kubenswrapper[4828]: E1205 19:21:02.353304 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6" podUID="3b18c18d-624d-4d50-95ba-a4f755f74936"
Dec 05 19:21:02 crc kubenswrapper[4828]: E1205 19:21:02.400658 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2" podUID="1e335b54-f84b-4d91-a58e-0348728d171e"
Dec 05 19:21:02 crc kubenswrapper[4828]: E1205 19:21:02.731685 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh" podUID="a03904e7-57be-4491-b11d-c8e698b718e6"
Dec 05 19:21:02 crc kubenswrapper[4828]: E1205 19:21:02.744309 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz" podUID="a27719f3-1ce1-4a2b-876f-f280966f8e8c"
Dec 05 19:21:02 crc kubenswrapper[4828]: I1205 19:21:02.837117 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq" event={"ID":"dabc71c3-947a-4d4c-90bd-b5bb473ce013","Type":"ContainerStarted","Data":"e601411b420ca199deb89b331ff64fb91144b463f9fc0ffe499fa7e8321d33a4"}
Dec 05 19:21:02 crc kubenswrapper[4828]: I1205 19:21:02.842581 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz" event={"ID":"c6d11d68-9609-432a-a855-4789df83739d","Type":"ContainerStarted","Data":"263eb337cba6d33b2475c835e62dd6e114296fbe7e2336811f538ac82fe9f78c"}
Dec 05 19:21:02 crc kubenswrapper[4828]: I1205 19:21:02.865286 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" event={"ID":"bf305ed3-e27f-42bc-9fb7-bec903ca820f","Type":"ContainerStarted","Data":"a3a870b767973da8e64f8ca95eb43afd0a99834a7979f4b45839d849a0b762d3"}
Dec 05 19:21:02 crc kubenswrapper[4828]: I1205 19:21:02.866340 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg"
Dec 05 19:21:02 crc kubenswrapper[4828]: I1205 19:21:02.871786 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg"
Dec 05 19:21:02 crc kubenswrapper[4828]: I1205 19:21:02.876755 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"5ae29702b9c693bc225b109d7199f5610a3f177228a2fa0ff8ce44ca6c251dda"}
Dec 05 19:21:02 crc kubenswrapper[4828]: I1205 19:21:02.901383 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2" event={"ID":"1e335b54-f84b-4d91-a58e-0348728d171e","Type":"ContainerStarted","Data":"85f2de70a474827669ff8a12f4353b3ab35daa76ded56a05cba0f38d3301e720"}
Dec 05 19:21:02 crc kubenswrapper[4828]: I1205 19:21:02.940399 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh" event={"ID":"a5d6b211-6f88-45fe-8e38-608271465dfe","Type":"ContainerStarted","Data":"b284a4d77152b3a4353d01d8c10cb8efc3e8c3ae9cdfac6e220a056e56953558"}
Dec 05 19:21:02 crc kubenswrapper[4828]: I1205 19:21:02.941442 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w4rgq" podStartSLOduration=3.242438042 podStartE2EDuration="53.941424295s" podCreationTimestamp="2025-12-05 19:20:09 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.829127744 +0000 UTC m=+988.724350050" lastFinishedPulling="2025-12-05 19:21:01.528113977 +0000 UTC m=+1039.423336303" observedRunningTime="2025-12-05 19:21:02.890442548 +0000 UTC m=+1040.785664854" watchObservedRunningTime="2025-12-05 19:21:02.941424295 +0000 UTC m=+1040.836646601"
Dec 05 19:21:02 crc kubenswrapper[4828]: I1205 19:21:02.942123 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-77twz" podStartSLOduration=29.870826351 podStartE2EDuration="54.942117885s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.683247367 +0000 UTC m=+988.578469673" lastFinishedPulling="2025-12-05 19:20:35.754538901 +0000 UTC m=+1013.649761207" observedRunningTime="2025-12-05 19:21:02.941284221 +0000 UTC m=+1040.836506527" watchObservedRunningTime="2025-12-05 19:21:02.942117885 +0000 UTC m=+1040.837340191"
Dec 05 19:21:02 crc kubenswrapper[4828]: I1205 19:21:02.966064 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b" event={"ID":"16bfe264-a5d1-433e-93ee-c6821e882c4c","Type":"ContainerStarted","Data":"9a8fd514b2ede234d93282c97454ca1517203751d570637d3238a4325937a3d3"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.006123 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp" event={"ID":"757d5884-94d5-45f1-ae2c-49fd93ce512c","Type":"ContainerStarted","Data":"3888e0ae753f05c1e49f8f660f624e3372a397f944f0353f3435a2f12b0308d4"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.007284 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.017343 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.024801 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg" event={"ID":"cd2986fb-f299-446c-85b7-28427df0ca51","Type":"ContainerStarted","Data":"e3038f6a0bf77ea65cc3584a1ce84624bc25c69c73c1d73439e06c67eb690e01"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.043139 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz" event={"ID":"a27719f3-1ce1-4a2b-876f-f280966f8e8c","Type":"ContainerStarted","Data":"1360575dffebaa5428e6ccccabb7b7a9ab693cf1cf21203d7ccb66fd7979c1b1"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.119250 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n" event={"ID":"1f1ef15a-9832-4ee5-8077-066329f6180a","Type":"ContainerStarted","Data":"3b623025c32b4b1005648ee1de751dd0a4fbaeef7f390951d20dc6e99dc0bc12"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.133837 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4" event={"ID":"7dbe4cda-8493-4e63-9544-7dfff2495c65","Type":"ContainerStarted","Data":"955ae7cf16d0f25f402d807f7c3f6124261783d16bd4139e5c8b62d838822119"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.136639 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c" event={"ID":"48135908-b8f6-47ab-aeb7-3f74bb3e2cde","Type":"ContainerStarted","Data":"55c78491a9c72f524f6efa33a45cb158fa5d7a01d555c468b70dae449b270228"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.137492 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.140439 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" event={"ID":"408ecf49-524f-4743-9cef-5c65877dd176","Type":"ContainerStarted","Data":"51892ce229aaddd365a2b3fb7c6f8a09f2f46d0a0b6de1ecdef3efe35646bc6b"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.146573 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.157387 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6" event={"ID":"3b18c18d-624d-4d50-95ba-a4f755f74936","Type":"ContainerStarted","Data":"3079f83e4d10120ffc63397a5aa458bd501eb70339ef2226ebf8007af63678d9"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.173705 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.174109 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g" event={"ID":"ba58375c-b3fa-4eb8-8813-c55f003674ca","Type":"ContainerStarted","Data":"8154ed0184bc637d082d572fa6f2e6e12dd1232191bfbbd04446d225cf1de96c"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.174900 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.175892 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qdslg" podStartSLOduration=25.963360279 podStartE2EDuration="54.17587849s" podCreationTimestamp="2025-12-05 19:20:09 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.881328335 +0000 UTC m=+988.776550641" lastFinishedPulling="2025-12-05 19:20:39.093846546 +0000 UTC m=+1016.989068852" observedRunningTime="2025-12-05 19:21:02.982784579 +0000 UTC m=+1040.878006895" watchObservedRunningTime="2025-12-05 19:21:03.17587849 +0000 UTC m=+1041.071100806"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.189279 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.195283 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" event={"ID":"1ce74c6c-ee96-4712-983f-4090e176f31e","Type":"ContainerStarted","Data":"75d595ee9c5875d9c0dcb6a84e0567780063acac1705c722dc501bebed16484b"}
Dec 05 19:21:03 crc kubenswrapper[4828]: E1205 19:21:03.200669 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" podUID="1ce74c6c-ee96-4712-983f-4090e176f31e"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.203871 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm" event={"ID":"04671bff-8616-471f-bd46-21e6b17227eb","Type":"ContainerStarted","Data":"582112fda4c5ed731d893e1bd07f233b3d1633151972b24c0af674b52c58b7fb"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.204848 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.210039 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.224067 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh" event={"ID":"a03904e7-57be-4491-b11d-c8e698b718e6","Type":"ContainerStarted","Data":"3c0e7dfc67801d1a727db0eb5a52d742ef1a8f04b373a87a4d84e5fc35deafa2"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.233875 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg" event={"ID":"4276bd34-acab-4936-a044-7d00e33e806f","Type":"ContainerStarted","Data":"e86b8d6b64d1ecbf341cc5974da1378905fd28254b6faef7e3049a0a5aa91948"}
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.234740 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.244030 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.265185 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" podStartSLOduration=54.265163137 podStartE2EDuration="54.265163137s" podCreationTimestamp="2025-12-05 19:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:21:03.261603069 +0000 UTC m=+1041.156825405" watchObservedRunningTime="2025-12-05 19:21:03.265163137 +0000 UTC m=+1041.160385443"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.395338 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4gr5g" podStartSLOduration=3.566468493 podStartE2EDuration="54.395318154s" podCreationTimestamp="2025-12-05 19:20:09 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.830914924 +0000 UTC m=+988.726137240" lastFinishedPulling="2025-12-05 19:21:01.659764595 +0000 UTC m=+1039.554986901" observedRunningTime="2025-12-05 19:21:03.393591646 +0000 UTC m=+1041.288813962" watchObservedRunningTime="2025-12-05 19:21:03.395318154 +0000 UTC m=+1041.290540450"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.467011 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-l6gtp" podStartSLOduration=27.172149761 podStartE2EDuration="55.466994218s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.863173438 +0000 UTC m=+988.758395744" lastFinishedPulling="2025-12-05 19:20:39.158017875 +0000 UTC m=+1017.053240201" observedRunningTime="2025-12-05 19:21:03.441765666 +0000 UTC m=+1041.336987972" watchObservedRunningTime="2025-12-05 19:21:03.466994218 +0000 UTC m=+1041.362216524"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.472930 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-6xg2c" podStartSLOduration=3.721895181 podStartE2EDuration="54.472908149s" podCreationTimestamp="2025-12-05 19:20:09 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.875937158 +0000 UTC m=+988.771159464" lastFinishedPulling="2025-12-05 19:21:01.626950126 +0000 UTC m=+1039.522172432" observedRunningTime="2025-12-05 19:21:03.463908363 +0000 UTC m=+1041.359130669" watchObservedRunningTime="2025-12-05 19:21:03.472908149 +0000 UTC m=+1041.368130455"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.609183 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9jbwm" podStartSLOduration=32.283861604 podStartE2EDuration="55.609167413s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.248130424 +0000 UTC m=+988.143352730" lastFinishedPulling="2025-12-05 19:20:33.573436233 +0000 UTC m=+1011.468658539" observedRunningTime="2025-12-05 19:21:03.605775451 +0000 UTC m=+1041.500997767" watchObservedRunningTime="2025-12-05 19:21:03.609167413 +0000 UTC m=+1041.504389719"
Dec 05 19:21:03 crc kubenswrapper[4828]: I1205 19:21:03.626687 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-qftqg" podStartSLOduration=29.885300357 podStartE2EDuration="55.626669463s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.013201346 +0000 UTC m=+987.908423652" lastFinishedPulling="2025-12-05 19:20:35.754570452 +0000 UTC m=+1013.649792758" observedRunningTime="2025-12-05 19:21:03.623494426 +0000 UTC m=+1041.518716732" watchObservedRunningTime="2025-12-05 19:21:03.626669463 +0000 UTC m=+1041.521891769"
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.241693 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx" event={"ID":"408ecf49-524f-4743-9cef-5c65877dd176","Type":"ContainerStarted","Data":"f540129bc6494052fc31d7a72e408c6fe45176852c0d63df44a4494ae466f205"}
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.243587 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"d5def740b3f0518b847d3234961782031960c932ffa15ed250f9eaa6ebc7f3af"}
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.243906 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.245464 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9" event={"ID":"8c1110f4-40af-416e-9624-22a901897000","Type":"ContainerStarted","Data":"2765877f40b01f6f26b85c03746f2ed771369ed8ee3f350fce9c72f77b717306"}
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.245681 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9"
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.248195 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9"
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.248446 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" event={"ID":"13474ecf-c76e-400f-bc72-70c11ab8356b","Type":"ContainerStarted","Data":"9f118dead40f33cc23c5b24556cb27a610247b6337db71f42f0d6ebb77276605"}
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.251482 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5" event={"ID":"f5bca056-89ff-4e36-82b7-ad44d9dc00d6","Type":"ContainerStarted","Data":"741d8d576075a2c89f5826d774dce8760052bdb78f8ce717bcaf053081b012ac"}
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.252053 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5"
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.253584 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5"
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.268231 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podStartSLOduration=33.358656688 podStartE2EDuration="56.268211803s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:38.624381752 +0000 UTC m=+1016.519604058" lastFinishedPulling="2025-12-05 19:21:01.533936867 +0000 UTC m=+1039.429159173" observedRunningTime="2025-12-05 19:21:04.261949072 +0000 UTC m=+1042.157171388" watchObservedRunningTime="2025-12-05 19:21:04.268211803 +0000 UTC m=+1042.163434109"
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.343448 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-hdxm9" podStartSLOduration=4.582266187 podStartE2EDuration="55.343432014s" podCreationTimestamp="2025-12-05 19:20:09 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.838972384 +0000 UTC m=+988.734194690" lastFinishedPulling="2025-12-05 19:21:01.600138211 +0000 UTC m=+1039.495360517" observedRunningTime="2025-12-05 19:21:04.340450313 +0000 UTC m=+1042.235672619" watchObservedRunningTime="2025-12-05 19:21:04.343432014 +0000 UTC m=+1042.238654320"
Dec 05 19:21:04 crc kubenswrapper[4828]: I1205 19:21:04.366566 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-k7qf5" podStartSLOduration=31.464297945 podStartE2EDuration="56.366549307s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.273501969 +0000 UTC m=+988.168724275" lastFinishedPulling="2025-12-05 19:20:35.175753331 +0000 UTC m=+1013.070975637" observedRunningTime="2025-12-05 19:21:04.361765447 +0000 UTC m=+1042.256987763" watchObservedRunningTime="2025-12-05 19:21:04.366549307 +0000 UTC m=+1042.261771613"
Dec 05 19:21:04 crc kubenswrapper[4828]: E1205 19:21:04.604298 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" podUID="1ce74c6c-ee96-4712-983f-4090e176f31e"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.265751 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh" event={"ID":"a03904e7-57be-4491-b11d-c8e698b718e6","Type":"ContainerStarted","Data":"e777e923dd098c418bdf9ba578d3ed0aca10a655935f4c6156d6027fa4a9c235"}
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.267034 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.269318 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4" event={"ID":"7dbe4cda-8493-4e63-9544-7dfff2495c65","Type":"ContainerStarted","Data":"7fa654443b73c3dad62b94c6f86b045bb01b438c6df07ed61ac7611e664c7e81"}
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.269725 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.271939 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2" event={"ID":"1e335b54-f84b-4d91-a58e-0348728d171e","Type":"ContainerStarted","Data":"4c9930727911f3475d8963300b72d90565853bf36f524b58755c4e67c90f812c"}
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.272431 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.277608 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh" event={"ID":"a5d6b211-6f88-45fe-8e38-608271465dfe","Type":"ContainerStarted","Data":"3030b26a284f49da4a197216042cc76fc1a09ca440e410954bb2492f005407a3"}
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.277735 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.279046 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg" event={"ID":"cd2986fb-f299-446c-85b7-28427df0ca51","Type":"ContainerStarted","Data":"01e485f18e186bc72864aa2f0f7fda2af1ecf997a49b2beed613cdc7f32f95d5"}
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.279449 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.280724 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n" event={"ID":"1f1ef15a-9832-4ee5-8077-066329f6180a","Type":"ContainerStarted","Data":"a53bf39a1431bb1113440611b4ecde6ad630654b258240f00ed18887ce37c94d"}
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.281120 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.289522 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6" event={"ID":"3b18c18d-624d-4d50-95ba-a4f755f74936","Type":"ContainerStarted","Data":"c8289f79a2213f6f9b922247db20475151641cfb31681561eed5e8440fee4a8b"}
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.290151 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.291907 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b" event={"ID":"16bfe264-a5d1-433e-93ee-c6821e882c4c","Type":"ContainerStarted","Data":"4eeb546ab8c5eab32c1f1d6f194cfa615df3cc2140ea550a50c922c9c800ca03"}
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.292271 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.300872 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz" event={"ID":"a27719f3-1ce1-4a2b-876f-f280966f8e8c","Type":"ContainerStarted","Data":"1a8493aba13344bd79589dc0b633396c810b5a8a5bf5b687e77762dca627beb9"}
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.300908 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.316252 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h2d97" podStartSLOduration=29.631975268 podStartE2EDuration="57.316239464s" podCreationTimestamp="2025-12-05 19:20:09 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.844382673 +0000 UTC m=+988.739604969" lastFinishedPulling="2025-12-05 19:20:38.528646859 +0000 UTC m=+1016.423869165" observedRunningTime="2025-12-05 19:21:04.43160428 +0000 UTC m=+1042.326826596" watchObservedRunningTime="2025-12-05 19:21:06.316239464 +0000 UTC m=+1044.211461770"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.317248 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh" podStartSLOduration=3.370921942 podStartE2EDuration="58.317244051s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.433570486 +0000 UTC m=+988.328792792" lastFinishedPulling="2025-12-05 19:21:05.379892585 +0000 UTC m=+1043.275114901" observedRunningTime="2025-12-05 19:21:06.313993882 +0000 UTC m=+1044.209216188" watchObservedRunningTime="2025-12-05 19:21:06.317244051 +0000 UTC m=+1044.212466357"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.355495 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz" podStartSLOduration=3.981844421 podStartE2EDuration="58.355475849s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.241339687 +0000 UTC m=+988.136561983" lastFinishedPulling="2025-12-05 19:21:04.614971105 +0000 UTC m=+1042.510193411" observedRunningTime="2025-12-05 19:21:06.349019692 +0000 UTC m=+1044.244241998" watchObservedRunningTime="2025-12-05 19:21:06.355475849 +0000 UTC m=+1044.250698155"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.489005 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4" podStartSLOduration=3.25854604 podStartE2EDuration="58.488989587s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.241405849 +0000 UTC m=+988.136628155" lastFinishedPulling="2025-12-05 19:21:05.471849396 +0000 UTC m=+1043.367071702" observedRunningTime="2025-12-05 19:21:06.485746709 +0000 UTC m=+1044.380969015" watchObservedRunningTime="2025-12-05 19:21:06.488989587 +0000 UTC m=+1044.384211893"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.490187 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6" podStartSLOduration=3.663781137 podStartE2EDuration="58.49018137s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.721725532 +0000 UTC m=+988.616947838" lastFinishedPulling="2025-12-05 19:21:05.548125765 +0000 UTC m=+1043.443348071" observedRunningTime="2025-12-05 19:21:06.441574438 +0000 UTC m=+1044.336796744" watchObservedRunningTime="2025-12-05 19:21:06.49018137 +0000 UTC m=+1044.385403676"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.539633 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh" podStartSLOduration=5.479163901 podStartE2EDuration="58.539614164s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.719896011 +0000 UTC m=+988.615118317" lastFinishedPulling="2025-12-05 19:21:03.780346274 +0000 UTC m=+1041.675568580" observedRunningTime="2025-12-05 19:21:06.536667594 +0000 UTC m=+1044.431889900" watchObservedRunningTime="2025-12-05 19:21:06.539614164 +0000 UTC m=+1044.434836470"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.569341 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n" podStartSLOduration=3.370389026 podStartE2EDuration="58.569295498s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.183065471 +0000 UTC m=+988.078287777" lastFinishedPulling="2025-12-05 19:21:05.381971943 +0000 UTC m=+1043.277194249" observedRunningTime="2025-12-05 19:21:06.56535925 +0000 UTC m=+1044.460581556" watchObservedRunningTime="2025-12-05 19:21:06.569295498 +0000 UTC m=+1044.464517804"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.610353 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg" podStartSLOduration=3.677945607 podStartE2EDuration="57.610339003s" podCreationTimestamp="2025-12-05 19:20:09 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.683013971 +0000 UTC m=+988.578236277" lastFinishedPulling="2025-12-05 19:21:04.615407367 +0000 UTC m=+1042.510629673" observedRunningTime="2025-12-05 19:21:06.605362996 +0000 UTC m=+1044.500585302" watchObservedRunningTime="2025-12-05 19:21:06.610339003 +0000 UTC m=+1044.505561309"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.646383 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b" podStartSLOduration=4.255722047 podStartE2EDuration="58.6463538s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.214844442 +0000 UTC m=+988.110066748" lastFinishedPulling="2025-12-05 19:21:04.605476195 +0000 UTC m=+1042.500698501" observedRunningTime="2025-12-05 19:21:06.632312175 +0000 UTC m=+1044.527534481" watchObservedRunningTime="2025-12-05 19:21:06.6463538 +0000 UTC m=+1044.541576106"
Dec 05 19:21:06 crc kubenswrapper[4828]: I1205 19:21:06.698331 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2" podStartSLOduration=3.6513877470000002 podStartE2EDuration="58.698311934s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:10.41804173 +0000 UTC m=+988.313264036" lastFinishedPulling="2025-12-05 19:21:05.464965917 +0000 UTC m=+1043.360188223" observedRunningTime="2025-12-05 19:21:06.691697452 +0000 UTC m=+1044.586919758" watchObservedRunningTime="2025-12-05 19:21:06.698311934 +0000 UTC m=+1044.593534260"
Dec 05 19:21:11 crc kubenswrapper[4828]: I1205 19:21:11.789430 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-d5958f94b-76zjx"
Dec 05 19:21:13 crc kubenswrapper[4828]: I1205 19:21:13.488713 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcnvv" event={"ID":"64594188-f4e0-4d0b-a5f9-e118634e82cb","Type":"ContainerStarted","Data":"994fc0b28bb7d79b71040feab1e862b020d9b8cff981e820c3028b32dd36343d"}
Dec 05 19:21:14 crc kubenswrapper[4828]: I1205 19:21:14.497520 4828 generic.go:334] "Generic (PLEG): container finished" podID="64594188-f4e0-4d0b-a5f9-e118634e82cb" containerID="994fc0b28bb7d79b71040feab1e862b020d9b8cff981e820c3028b32dd36343d" exitCode=0
Dec 05 19:21:14 crc kubenswrapper[4828]: I1205 19:21:14.497563 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcnvv" event={"ID":"64594188-f4e0-4d0b-a5f9-e118634e82cb","Type":"ContainerDied","Data":"994fc0b28bb7d79b71040feab1e862b020d9b8cff981e820c3028b32dd36343d"}
Dec 05 19:21:15 crc
kubenswrapper[4828]: I1205 19:21:15.124383 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:21:15 crc kubenswrapper[4828]: I1205 19:21:15.510439 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcnvv" event={"ID":"64594188-f4e0-4d0b-a5f9-e118634e82cb","Type":"ContainerStarted","Data":"1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255"} Dec 05 19:21:15 crc kubenswrapper[4828]: I1205 19:21:15.531283 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lcnvv" podStartSLOduration=30.326613814 podStartE2EDuration="1m2.531262748s" podCreationTimestamp="2025-12-05 19:20:13 +0000 UTC" firstStartedPulling="2025-12-05 19:20:42.733011848 +0000 UTC m=+1020.628234154" lastFinishedPulling="2025-12-05 19:21:14.937660732 +0000 UTC m=+1052.832883088" observedRunningTime="2025-12-05 19:21:15.523446244 +0000 UTC m=+1053.418668550" watchObservedRunningTime="2025-12-05 19:21:15.531262748 +0000 UTC m=+1053.426485054" Dec 05 19:21:19 crc kubenswrapper[4828]: I1205 19:21:19.026854 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-jbg6n" Dec 05 19:21:19 crc kubenswrapper[4828]: I1205 19:21:19.119728 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-cr94b" Dec 05 19:21:19 crc kubenswrapper[4828]: I1205 19:21:19.133978 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-v92pz" Dec 05 19:21:19 crc kubenswrapper[4828]: I1205 19:21:19.161155 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-g2wd4" Dec 05 19:21:19 crc kubenswrapper[4828]: I1205 19:21:19.303493 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-cgkv6" Dec 05 19:21:19 crc kubenswrapper[4828]: I1205 19:21:19.336376 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-pbmt2" Dec 05 19:21:19 crc kubenswrapper[4828]: I1205 19:21:19.453347 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-gsczh" Dec 05 19:21:19 crc kubenswrapper[4828]: I1205 19:21:19.528381 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-cfnbh" Dec 05 19:21:19 crc kubenswrapper[4828]: I1205 19:21:19.549615 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" event={"ID":"1ce74c6c-ee96-4712-983f-4090e176f31e","Type":"ContainerStarted","Data":"10e5756af2fbe69bc1b337cf4150ee3b7df7a1fbb55da757629435663e86f2ae"} Dec 05 19:21:19 crc kubenswrapper[4828]: I1205 19:21:19.549834 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:21:19 crc kubenswrapper[4828]: I1205 19:21:19.608457 4828 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" podStartSLOduration=31.331493057 podStartE2EDuration="1m11.608439713s" podCreationTimestamp="2025-12-05 19:20:08 +0000 UTC" firstStartedPulling="2025-12-05 19:20:39.015722815 +0000 UTC m=+1016.910945121" lastFinishedPulling="2025-12-05 19:21:19.292669481 +0000 UTC m=+1057.187891777" observedRunningTime="2025-12-05 19:21:19.600234138 +0000 UTC m=+1057.495456464" watchObservedRunningTime="2025-12-05 19:21:19.608439713 +0000 UTC m=+1057.503662019" Dec 05 19:21:19 crc kubenswrapper[4828]: I1205 19:21:19.608978 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-cf5gg" Dec 05 19:21:24 crc kubenswrapper[4828]: I1205 19:21:24.255951 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:21:24 crc kubenswrapper[4828]: I1205 19:21:24.256337 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:21:24 crc kubenswrapper[4828]: I1205 19:21:24.302746 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:21:24 crc kubenswrapper[4828]: I1205 19:21:24.637268 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:21:24 crc kubenswrapper[4828]: I1205 19:21:24.689800 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcnvv"] Dec 05 19:21:25 crc kubenswrapper[4828]: I1205 19:21:25.466312 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j" Dec 05 19:21:26 crc kubenswrapper[4828]: I1205 19:21:26.602696 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lcnvv" podUID="64594188-f4e0-4d0b-a5f9-e118634e82cb" containerName="registry-server" containerID="cri-o://1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255" gracePeriod=2 Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.497440 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.588635 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wznr\" (UniqueName: \"kubernetes.io/projected/64594188-f4e0-4d0b-a5f9-e118634e82cb-kube-api-access-5wznr\") pod \"64594188-f4e0-4d0b-a5f9-e118634e82cb\" (UID: \"64594188-f4e0-4d0b-a5f9-e118634e82cb\") " Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.588770 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64594188-f4e0-4d0b-a5f9-e118634e82cb-catalog-content\") pod \"64594188-f4e0-4d0b-a5f9-e118634e82cb\" (UID: \"64594188-f4e0-4d0b-a5f9-e118634e82cb\") " Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.588799 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64594188-f4e0-4d0b-a5f9-e118634e82cb-utilities\") pod \"64594188-f4e0-4d0b-a5f9-e118634e82cb\" (UID: \"64594188-f4e0-4d0b-a5f9-e118634e82cb\") " Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.591437 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64594188-f4e0-4d0b-a5f9-e118634e82cb-utilities" (OuterVolumeSpecName: "utilities") pod "64594188-f4e0-4d0b-a5f9-e118634e82cb" (UID: "64594188-f4e0-4d0b-a5f9-e118634e82cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.606469 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64594188-f4e0-4d0b-a5f9-e118634e82cb-kube-api-access-5wznr" (OuterVolumeSpecName: "kube-api-access-5wznr") pod "64594188-f4e0-4d0b-a5f9-e118634e82cb" (UID: "64594188-f4e0-4d0b-a5f9-e118634e82cb"). InnerVolumeSpecName "kube-api-access-5wznr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.620288 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64594188-f4e0-4d0b-a5f9-e118634e82cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64594188-f4e0-4d0b-a5f9-e118634e82cb" (UID: "64594188-f4e0-4d0b-a5f9-e118634e82cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.621301 4828 generic.go:334] "Generic (PLEG): container finished" podID="64594188-f4e0-4d0b-a5f9-e118634e82cb" containerID="1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255" exitCode=0 Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.621371 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcnvv" event={"ID":"64594188-f4e0-4d0b-a5f9-e118634e82cb","Type":"ContainerDied","Data":"1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255"} Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.621431 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcnvv" event={"ID":"64594188-f4e0-4d0b-a5f9-e118634e82cb","Type":"ContainerDied","Data":"2b14f5cfc1eec376beacf4df3e0b786b22330f1e5f4c27ed83fc99a358d90abb"} Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.621453 4828 scope.go:117] "RemoveContainer" containerID="1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.621663 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcnvv" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.641118 4828 scope.go:117] "RemoveContainer" containerID="994fc0b28bb7d79b71040feab1e862b020d9b8cff981e820c3028b32dd36343d" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.657835 4828 scope.go:117] "RemoveContainer" containerID="895c0fba81ad2f311572838c7f47d0bca8b676d05b383f2fcd2b55361440b44d" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.667186 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcnvv"] Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.667306 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcnvv"] Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.689926 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wznr\" (UniqueName: \"kubernetes.io/projected/64594188-f4e0-4d0b-a5f9-e118634e82cb-kube-api-access-5wznr\") on node \"crc\" DevicePath \"\"" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.689953 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64594188-f4e0-4d0b-a5f9-e118634e82cb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.689962 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64594188-f4e0-4d0b-a5f9-e118634e82cb-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.695478 4828 scope.go:117] "RemoveContainer" containerID="1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255" Dec 05 19:21:27 crc kubenswrapper[4828]: E1205 19:21:27.695935 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255\": container with ID starting with 1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255 not found: ID does not exist" containerID="1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.695967 4828 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255"} err="failed to get container status \"1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255\": rpc error: code = NotFound desc = could not find container \"1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255\": container with ID starting with 1b09c4aebac097250cf6d9badee068df6a198bc6faedf8a273040a878d64a255 not found: ID does not exist" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.695989 4828 scope.go:117] "RemoveContainer" containerID="994fc0b28bb7d79b71040feab1e862b020d9b8cff981e820c3028b32dd36343d" Dec 05 19:21:27 crc kubenswrapper[4828]: E1205 19:21:27.696306 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994fc0b28bb7d79b71040feab1e862b020d9b8cff981e820c3028b32dd36343d\": container with ID starting with 994fc0b28bb7d79b71040feab1e862b020d9b8cff981e820c3028b32dd36343d not found: ID does not exist" containerID="994fc0b28bb7d79b71040feab1e862b020d9b8cff981e820c3028b32dd36343d" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.696327 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994fc0b28bb7d79b71040feab1e862b020d9b8cff981e820c3028b32dd36343d"} err="failed to get container status \"994fc0b28bb7d79b71040feab1e862b020d9b8cff981e820c3028b32dd36343d\": rpc error: code = NotFound desc = could not find container \"994fc0b28bb7d79b71040feab1e862b020d9b8cff981e820c3028b32dd36343d\": container with ID starting with 994fc0b28bb7d79b71040feab1e862b020d9b8cff981e820c3028b32dd36343d not found: ID does not exist" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.696360 4828 scope.go:117] "RemoveContainer" containerID="895c0fba81ad2f311572838c7f47d0bca8b676d05b383f2fcd2b55361440b44d" Dec 05 19:21:27 crc kubenswrapper[4828]: E1205 19:21:27.696594 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"895c0fba81ad2f311572838c7f47d0bca8b676d05b383f2fcd2b55361440b44d\": container with ID starting with 895c0fba81ad2f311572838c7f47d0bca8b676d05b383f2fcd2b55361440b44d not found: ID does not exist" containerID="895c0fba81ad2f311572838c7f47d0bca8b676d05b383f2fcd2b55361440b44d" Dec 05 19:21:27 crc kubenswrapper[4828]: I1205 19:21:27.696651 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895c0fba81ad2f311572838c7f47d0bca8b676d05b383f2fcd2b55361440b44d"} err="failed to get container status \"895c0fba81ad2f311572838c7f47d0bca8b676d05b383f2fcd2b55361440b44d\": rpc error: code = NotFound desc = could not find container \"895c0fba81ad2f311572838c7f47d0bca8b676d05b383f2fcd2b55361440b44d\": container with ID starting with 895c0fba81ad2f311572838c7f47d0bca8b676d05b383f2fcd2b55361440b44d not found: ID does not exist" Dec 05 19:21:28 crc kubenswrapper[4828]: I1205 19:21:28.462717 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64594188-f4e0-4d0b-a5f9-e118634e82cb" path="/var/lib/kubelet/pods/64594188-f4e0-4d0b-a5f9-e118634e82cb/volumes" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.630840 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6glxp"] Dec 05 19:21:48 crc kubenswrapper[4828]: E1205 19:21:48.633806 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="64594188-f4e0-4d0b-a5f9-e118634e82cb" containerName="extract-utilities" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.633842 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="64594188-f4e0-4d0b-a5f9-e118634e82cb" containerName="extract-utilities" Dec 05 19:21:48 crc kubenswrapper[4828]: E1205 19:21:48.633870 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64594188-f4e0-4d0b-a5f9-e118634e82cb" containerName="extract-content" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.633876 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="64594188-f4e0-4d0b-a5f9-e118634e82cb" containerName="extract-content" Dec 05 19:21:48 crc kubenswrapper[4828]: E1205 19:21:48.633892 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64594188-f4e0-4d0b-a5f9-e118634e82cb" containerName="registry-server" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.633898 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="64594188-f4e0-4d0b-a5f9-e118634e82cb" containerName="registry-server" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.634039 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="64594188-f4e0-4d0b-a5f9-e118634e82cb" containerName="registry-server" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.634772 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.643929 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6glxp"] Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.644184 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-4jn7d" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.644359 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.644497 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.644863 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.706799 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621bc79a-9f33-4dbe-94e4-74cc81382cbb-config\") pod \"dnsmasq-dns-675f4bcbfc-6glxp\" (UID: \"621bc79a-9f33-4dbe-94e4-74cc81382cbb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.706908 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sh9p\" (UniqueName: \"kubernetes.io/projected/621bc79a-9f33-4dbe-94e4-74cc81382cbb-kube-api-access-5sh9p\") pod \"dnsmasq-dns-675f4bcbfc-6glxp\" (UID: \"621bc79a-9f33-4dbe-94e4-74cc81382cbb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.725710 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-drq5x"] Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.727282 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.733269 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.741663 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-drq5x"] Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.808464 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sh9p\" (UniqueName: \"kubernetes.io/projected/621bc79a-9f33-4dbe-94e4-74cc81382cbb-kube-api-access-5sh9p\") pod \"dnsmasq-dns-675f4bcbfc-6glxp\" (UID: \"621bc79a-9f33-4dbe-94e4-74cc81382cbb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.808523 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d51fed5-9f28-495a-8a60-223fa1409caf-config\") pod \"dnsmasq-dns-78dd6ddcc-drq5x\" (UID: \"3d51fed5-9f28-495a-8a60-223fa1409caf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.808549 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57czk\" (UniqueName: \"kubernetes.io/projected/3d51fed5-9f28-495a-8a60-223fa1409caf-kube-api-access-57czk\") pod \"dnsmasq-dns-78dd6ddcc-drq5x\" (UID: \"3d51fed5-9f28-495a-8a60-223fa1409caf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.808778 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d51fed5-9f28-495a-8a60-223fa1409caf-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-drq5x\" (UID: \"3d51fed5-9f28-495a-8a60-223fa1409caf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.808909 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621bc79a-9f33-4dbe-94e4-74cc81382cbb-config\") pod \"dnsmasq-dns-675f4bcbfc-6glxp\" (UID: \"621bc79a-9f33-4dbe-94e4-74cc81382cbb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.810166 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621bc79a-9f33-4dbe-94e4-74cc81382cbb-config\") pod \"dnsmasq-dns-675f4bcbfc-6glxp\" (UID: \"621bc79a-9f33-4dbe-94e4-74cc81382cbb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.828152 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sh9p\" (UniqueName: \"kubernetes.io/projected/621bc79a-9f33-4dbe-94e4-74cc81382cbb-kube-api-access-5sh9p\") pod \"dnsmasq-dns-675f4bcbfc-6glxp\" (UID: \"621bc79a-9f33-4dbe-94e4-74cc81382cbb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.910329 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d51fed5-9f28-495a-8a60-223fa1409caf-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-drq5x\" (UID: \"3d51fed5-9f28-495a-8a60-223fa1409caf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x" Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 
Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.910776 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57czk\" (UniqueName: \"kubernetes.io/projected/3d51fed5-9f28-495a-8a60-223fa1409caf-kube-api-access-57czk\") pod \"dnsmasq-dns-78dd6ddcc-drq5x\" (UID: \"3d51fed5-9f28-495a-8a60-223fa1409caf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x"
Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.911253 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d51fed5-9f28-495a-8a60-223fa1409caf-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-drq5x\" (UID: \"3d51fed5-9f28-495a-8a60-223fa1409caf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x"
Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.911383 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d51fed5-9f28-495a-8a60-223fa1409caf-config\") pod \"dnsmasq-dns-78dd6ddcc-drq5x\" (UID: \"3d51fed5-9f28-495a-8a60-223fa1409caf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x"
Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.933142 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57czk\" (UniqueName: \"kubernetes.io/projected/3d51fed5-9f28-495a-8a60-223fa1409caf-kube-api-access-57czk\") pod \"dnsmasq-dns-78dd6ddcc-drq5x\" (UID: \"3d51fed5-9f28-495a-8a60-223fa1409caf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x"
Dec 05 19:21:48 crc kubenswrapper[4828]: I1205 19:21:48.954352 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp"
Dec 05 19:21:49 crc kubenswrapper[4828]: I1205 19:21:49.047869 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x"
Dec 05 19:21:49 crc kubenswrapper[4828]: I1205 19:21:49.405331 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6glxp"]
Dec 05 19:21:49 crc kubenswrapper[4828]: W1205 19:21:49.420033 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod621bc79a_9f33_4dbe_94e4_74cc81382cbb.slice/crio-d2bfc026b777c92c87ef4557d37a2a54f2fb1a8de4c5d4e08adcc2ca96037a80 WatchSource:0}: Error finding container d2bfc026b777c92c87ef4557d37a2a54f2fb1a8de4c5d4e08adcc2ca96037a80: Status 404 returned error can't find the container with id d2bfc026b777c92c87ef4557d37a2a54f2fb1a8de4c5d4e08adcc2ca96037a80
Dec 05 19:21:49 crc kubenswrapper[4828]: I1205 19:21:49.571776 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-drq5x"]
Dec 05 19:21:49 crc kubenswrapper[4828]: I1205 19:21:49.798814 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x" event={"ID":"3d51fed5-9f28-495a-8a60-223fa1409caf","Type":"ContainerStarted","Data":"35db591772c7100f8a0868028655444b47eb50a7ca48c164603c96376604fda6"}
Dec 05 19:21:49 crc kubenswrapper[4828]: I1205 19:21:49.800104 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp" event={"ID":"621bc79a-9f33-4dbe-94e4-74cc81382cbb","Type":"ContainerStarted","Data":"d2bfc026b777c92c87ef4557d37a2a54f2fb1a8de4c5d4e08adcc2ca96037a80"}
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.480669 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6glxp"]
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.532153 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7j6xm"]
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.537744 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.544029 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7j6xm"]
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.723068 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33761e0-29f6-42c7-9c1d-cba24654a37a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7j6xm\" (UID: \"f33761e0-29f6-42c7-9c1d-cba24654a37a\") " pod="openstack/dnsmasq-dns-666b6646f7-7j6xm"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.723113 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33761e0-29f6-42c7-9c1d-cba24654a37a-config\") pod \"dnsmasq-dns-666b6646f7-7j6xm\" (UID: \"f33761e0-29f6-42c7-9c1d-cba24654a37a\") " pod="openstack/dnsmasq-dns-666b6646f7-7j6xm"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.723140 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwpjm\" (UniqueName: \"kubernetes.io/projected/f33761e0-29f6-42c7-9c1d-cba24654a37a-kube-api-access-pwpjm\") pod \"dnsmasq-dns-666b6646f7-7j6xm\" (UID: \"f33761e0-29f6-42c7-9c1d-cba24654a37a\") " pod="openstack/dnsmasq-dns-666b6646f7-7j6xm"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.823783 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33761e0-29f6-42c7-9c1d-cba24654a37a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7j6xm\" (UID: \"f33761e0-29f6-42c7-9c1d-cba24654a37a\") " pod="openstack/dnsmasq-dns-666b6646f7-7j6xm"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.823849 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33761e0-29f6-42c7-9c1d-cba24654a37a-config\") pod \"dnsmasq-dns-666b6646f7-7j6xm\" (UID: \"f33761e0-29f6-42c7-9c1d-cba24654a37a\") " pod="openstack/dnsmasq-dns-666b6646f7-7j6xm"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.823879 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwpjm\" (UniqueName: \"kubernetes.io/projected/f33761e0-29f6-42c7-9c1d-cba24654a37a-kube-api-access-pwpjm\") pod \"dnsmasq-dns-666b6646f7-7j6xm\" (UID: \"f33761e0-29f6-42c7-9c1d-cba24654a37a\") " pod="openstack/dnsmasq-dns-666b6646f7-7j6xm"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.824743 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33761e0-29f6-42c7-9c1d-cba24654a37a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7j6xm\" (UID: \"f33761e0-29f6-42c7-9c1d-cba24654a37a\") " pod="openstack/dnsmasq-dns-666b6646f7-7j6xm"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.824976 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33761e0-29f6-42c7-9c1d-cba24654a37a-config\") pod \"dnsmasq-dns-666b6646f7-7j6xm\" (UID: \"f33761e0-29f6-42c7-9c1d-cba24654a37a\") " pod="openstack/dnsmasq-dns-666b6646f7-7j6xm"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.857391 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-drq5x"]
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.865602 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwpjm\" (UniqueName: \"kubernetes.io/projected/f33761e0-29f6-42c7-9c1d-cba24654a37a-kube-api-access-pwpjm\") pod \"dnsmasq-dns-666b6646f7-7j6xm\" (UID: \"f33761e0-29f6-42c7-9c1d-cba24654a37a\") " pod="openstack/dnsmasq-dns-666b6646f7-7j6xm"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.869005 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.886707 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sqmk4"]
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.892938 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.895256 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sqmk4"]
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.924712 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8nx\" (UniqueName: \"kubernetes.io/projected/6fb6a146-a614-4339-9e07-8892e82fed36-kube-api-access-gl8nx\") pod \"dnsmasq-dns-57d769cc4f-sqmk4\" (UID: \"6fb6a146-a614-4339-9e07-8892e82fed36\") " pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.924798 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb6a146-a614-4339-9e07-8892e82fed36-config\") pod \"dnsmasq-dns-57d769cc4f-sqmk4\" (UID: \"6fb6a146-a614-4339-9e07-8892e82fed36\") " pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4"
Dec 05 19:21:51 crc kubenswrapper[4828]: I1205 19:21:51.924816 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fb6a146-a614-4339-9e07-8892e82fed36-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-sqmk4\" (UID: \"6fb6a146-a614-4339-9e07-8892e82fed36\") " pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.027513 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb6a146-a614-4339-9e07-8892e82fed36-config\") pod \"dnsmasq-dns-57d769cc4f-sqmk4\" (UID: \"6fb6a146-a614-4339-9e07-8892e82fed36\") " pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.027867 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fb6a146-a614-4339-9e07-8892e82fed36-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-sqmk4\" (UID: \"6fb6a146-a614-4339-9e07-8892e82fed36\") " pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.028599 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl8nx\" (UniqueName: \"kubernetes.io/projected/6fb6a146-a614-4339-9e07-8892e82fed36-kube-api-access-gl8nx\") pod \"dnsmasq-dns-57d769cc4f-sqmk4\" (UID: \"6fb6a146-a614-4339-9e07-8892e82fed36\") " pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.028797 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb6a146-a614-4339-9e07-8892e82fed36-config\") pod \"dnsmasq-dns-57d769cc4f-sqmk4\" (UID: \"6fb6a146-a614-4339-9e07-8892e82fed36\") " pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.029019 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fb6a146-a614-4339-9e07-8892e82fed36-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-sqmk4\" (UID: \"6fb6a146-a614-4339-9e07-8892e82fed36\") " pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.046846 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl8nx\" (UniqueName: \"kubernetes.io/projected/6fb6a146-a614-4339-9e07-8892e82fed36-kube-api-access-gl8nx\") pod \"dnsmasq-dns-57d769cc4f-sqmk4\" (UID: \"6fb6a146-a614-4339-9e07-8892e82fed36\") " pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.296314 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.695092 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.696777 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.698755 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.698984 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.699396 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.699875 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.700099 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.700265 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.700513 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-sbwjw"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.708801 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.720397 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7j6xm"]
Dec 05 19:21:52 crc kubenswrapper[4828]: W1205 19:21:52.730390 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf33761e0_29f6_42c7_9c1d_cba24654a37a.slice/crio-7abcdf18af1765934dc9ead9941e956740ca7d0444de4d57951559a1aa847018 WatchSource:0}: Error finding container 7abcdf18af1765934dc9ead9941e956740ca7d0444de4d57951559a1aa847018: Status 404 returned error can't find the container with id 7abcdf18af1765934dc9ead9941e956740ca7d0444de4d57951559a1aa847018
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.828503 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm" event={"ID":"f33761e0-29f6-42c7-9c1d-cba24654a37a","Type":"ContainerStarted","Data":"7abcdf18af1765934dc9ead9941e956740ca7d0444de4d57951559a1aa847018"}
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.843755 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e21a851c-5179-4365-8e72-5dea16be90cc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.843813 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.843873 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.843903 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e21a851c-5179-4365-8e72-5dea16be90cc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.843939 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.843962 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.844287 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqjv6\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-kube-api-access-pqjv6\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.844366 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.844415 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.844435 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.844457 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-config-data\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.945581 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqjv6\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-kube-api-access-pqjv6\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.945641 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.945660 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.945679 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.945696 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-config-data\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.946128 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e21a851c-5179-4365-8e72-5dea16be90cc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.946160 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.946198 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.946223 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e21a851c-5179-4365-8e72-5dea16be90cc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.946296 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.946330 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.946640 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.947226 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.947523 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-config-data\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.947630 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.947784 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.949768 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.956995 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.958702 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.959247 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e21a851c-5179-4365-8e72-5dea16be90cc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.961717 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e21a851c-5179-4365-8e72-5dea16be90cc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.964621 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqjv6\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-kube-api-access-pqjv6\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.972411 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " pod="openstack/rabbitmq-server-0"
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.991979 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 05 19:21:52 crc kubenswrapper[4828]: I1205 19:21:52.993184 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:52.998037 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:52.998397 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:52.999006 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:52.998139 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.002483 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.002772 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-l4l5h"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.006484 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.010157 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.048573 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.149561 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.149813 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.149898 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.149962 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.150100 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzhv4\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-kube-api-access-rzhv4\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.150153 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50db8d67-b1c6-4165-a526-8149092660ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.150189 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.150231 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.150286 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.150313 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50db8d67-b1c6-4165-a526-8149092660ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.150367 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.236426 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sqmk4"]
Dec 05 19:21:53 crc kubenswrapper[4828]: W1205 19:21:53.239283 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fb6a146_a614_4339_9e07_8892e82fed36.slice/crio-470dccda9196093045054ac03a70186e29764ae3f79636bb171a178c870f31e4 WatchSource:0}: Error finding container 470dccda9196093045054ac03a70186e29764ae3f79636bb171a178c870f31e4: Status 404 returned error can't find the container with id 470dccda9196093045054ac03a70186e29764ae3f79636bb171a178c870f31e4
Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.252650 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzhv4\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-kube-api-access-rzhv4\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") "
pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.252701 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50db8d67-b1c6-4165-a526-8149092660ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.252720 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.252739 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.252758 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.252781 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50db8d67-b1c6-4165-a526-8149092660ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.252867 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.252905 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.252922 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.252956 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.252978 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.255977 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.256745 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.256801 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.256852 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.257083 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.259063 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50db8d67-b1c6-4165-a526-8149092660ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.259572 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.259812 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.263466 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50db8d67-b1c6-4165-a526-8149092660ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.265125 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.270737 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzhv4\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-kube-api-access-rzhv4\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.298298 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.346242 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.752875 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.836130 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4" event={"ID":"6fb6a146-a614-4339-9e07-8892e82fed36","Type":"ContainerStarted","Data":"470dccda9196093045054ac03a70186e29764ae3f79636bb171a178c870f31e4"} Dec 05 19:21:53 crc kubenswrapper[4828]: I1205 19:21:53.837322 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e21a851c-5179-4365-8e72-5dea16be90cc","Type":"ContainerStarted","Data":"9d11b726f1b9f69adb93085f723671e565bce6d95b184041aae26d57b88c2fe7"} Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.040988 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.525362 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.526886 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.528929 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.528929 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.532758 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4g5dh" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.534438 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.537298 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.540844 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.650585 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2debacb-a691-43ee-aa79-670bbec2a98a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.650650 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2debacb-a691-43ee-aa79-670bbec2a98a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.650671 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2debacb-a691-43ee-aa79-670bbec2a98a-kolla-config\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.650687 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2debacb-a691-43ee-aa79-670bbec2a98a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.650736 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2debacb-a691-43ee-aa79-670bbec2a98a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.650769 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.650800 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9t4qk\" (UniqueName: \"kubernetes.io/projected/a2debacb-a691-43ee-aa79-670bbec2a98a-kube-api-access-9t4qk\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.650845 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2debacb-a691-43ee-aa79-670bbec2a98a-config-data-default\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.752212 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2debacb-a691-43ee-aa79-670bbec2a98a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.752275 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.752311 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t4qk\" (UniqueName: \"kubernetes.io/projected/a2debacb-a691-43ee-aa79-670bbec2a98a-kube-api-access-9t4qk\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.752350 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2debacb-a691-43ee-aa79-670bbec2a98a-config-data-default\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.752408 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2debacb-a691-43ee-aa79-670bbec2a98a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.752445 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2debacb-a691-43ee-aa79-670bbec2a98a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.752472 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2debacb-a691-43ee-aa79-670bbec2a98a-kolla-config\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.752492 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2debacb-a691-43ee-aa79-670bbec2a98a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.752711 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2debacb-a691-43ee-aa79-670bbec2a98a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.753078 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.753309 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2debacb-a691-43ee-aa79-670bbec2a98a-config-data-default\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.754080 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2debacb-a691-43ee-aa79-670bbec2a98a-kolla-config\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.754783 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2debacb-a691-43ee-aa79-670bbec2a98a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.761222 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2debacb-a691-43ee-aa79-670bbec2a98a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.774948 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t4qk\" (UniqueName: \"kubernetes.io/projected/a2debacb-a691-43ee-aa79-670bbec2a98a-kube-api-access-9t4qk\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.777557 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2debacb-a691-43ee-aa79-670bbec2a98a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.781568 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"a2debacb-a691-43ee-aa79-670bbec2a98a\") " pod="openstack/openstack-galera-0" Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.846189 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"50db8d67-b1c6-4165-a526-8149092660ed","Type":"ContainerStarted","Data":"6c7ad0e64c226782f30300bb540d0049375bbf6a23ae19ecbdc1ef787775e856"} Dec 05 19:21:54 crc kubenswrapper[4828]: I1205 19:21:54.846784 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 19:21:55 crc kubenswrapper[4828]: I1205 19:21:55.896579 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 19:21:55 crc kubenswrapper[4828]: I1205 19:21:55.899808 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:55 crc kubenswrapper[4828]: I1205 19:21:55.905029 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 05 19:21:55 crc kubenswrapper[4828]: I1205 19:21:55.905981 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 05 19:21:55 crc kubenswrapper[4828]: I1205 19:21:55.906169 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 05 19:21:55 crc kubenswrapper[4828]: I1205 19:21:55.908885 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pwnbs" Dec 05 19:21:55 crc kubenswrapper[4828]: I1205 19:21:55.918345 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.072580 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7064b569-c206-4ed9-8f28-3e5a7e92bf79-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.072652 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.072693 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7064b569-c206-4ed9-8f28-3e5a7e92bf79-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.072722 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7064b569-c206-4ed9-8f28-3e5a7e92bf79-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.072738 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7064b569-c206-4ed9-8f28-3e5a7e92bf79-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc 
kubenswrapper[4828]: I1205 19:21:56.072772 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7064b569-c206-4ed9-8f28-3e5a7e92bf79-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.072796 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7064b569-c206-4ed9-8f28-3e5a7e92bf79-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.072841 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w96j4\" (UniqueName: \"kubernetes.io/projected/7064b569-c206-4ed9-8f28-3e5a7e92bf79-kube-api-access-w96j4\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.174467 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7064b569-c206-4ed9-8f28-3e5a7e92bf79-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.174536 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7064b569-c206-4ed9-8f28-3e5a7e92bf79-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.174560 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7064b569-c206-4ed9-8f28-3e5a7e92bf79-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.174608 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7064b569-c206-4ed9-8f28-3e5a7e92bf79-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.175095 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7064b569-c206-4ed9-8f28-3e5a7e92bf79-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.175422 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7064b569-c206-4ed9-8f28-3e5a7e92bf79-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 
19:21:56.175621 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7064b569-c206-4ed9-8f28-3e5a7e92bf79-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.175668 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w96j4\" (UniqueName: \"kubernetes.io/projected/7064b569-c206-4ed9-8f28-3e5a7e92bf79-kube-api-access-w96j4\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.175713 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7064b569-c206-4ed9-8f28-3e5a7e92bf79-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.175768 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.176053 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.176241 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7064b569-c206-4ed9-8f28-3e5a7e92bf79-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.176538 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7064b569-c206-4ed9-8f28-3e5a7e92bf79-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.179275 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7064b569-c206-4ed9-8f28-3e5a7e92bf79-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.194514 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7064b569-c206-4ed9-8f28-3e5a7e92bf79-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.205093 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w96j4\" 
(UniqueName: \"kubernetes.io/projected/7064b569-c206-4ed9-8f28-3e5a7e92bf79-kube-api-access-w96j4\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.210121 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7064b569-c206-4ed9-8f28-3e5a7e92bf79\") " pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.238154 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.366455 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.367731 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.375904 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.376030 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lw7bw" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.376223 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.395627 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.479406 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93282807-6c59-42db-9235-8b2097a8f7a9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.479468 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q5st\" (UniqueName: \"kubernetes.io/projected/93282807-6c59-42db-9235-8b2097a8f7a9-kube-api-access-5q5st\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.479504 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93282807-6c59-42db-9235-8b2097a8f7a9-config-data\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.479533 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/93282807-6c59-42db-9235-8b2097a8f7a9-kolla-config\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.479689 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/93282807-6c59-42db-9235-8b2097a8f7a9-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.581452 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93282807-6c59-42db-9235-8b2097a8f7a9-config-data\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.581519 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/93282807-6c59-42db-9235-8b2097a8f7a9-kolla-config\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.581566 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/93282807-6c59-42db-9235-8b2097a8f7a9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.581706 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93282807-6c59-42db-9235-8b2097a8f7a9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.581726 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q5st\" (UniqueName: \"kubernetes.io/projected/93282807-6c59-42db-9235-8b2097a8f7a9-kube-api-access-5q5st\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.582224 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/93282807-6c59-42db-9235-8b2097a8f7a9-kolla-config\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.583038 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93282807-6c59-42db-9235-8b2097a8f7a9-config-data\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.598448 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/93282807-6c59-42db-9235-8b2097a8f7a9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.601258 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q5st\" (UniqueName: \"kubernetes.io/projected/93282807-6c59-42db-9235-8b2097a8f7a9-kube-api-access-5q5st\") pod \"memcached-0\" (UID: \"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.601993 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93282807-6c59-42db-9235-8b2097a8f7a9-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"93282807-6c59-42db-9235-8b2097a8f7a9\") " pod="openstack/memcached-0" Dec 05 19:21:56 crc kubenswrapper[4828]: I1205 19:21:56.684547 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 05 19:21:58 crc kubenswrapper[4828]: I1205 19:21:58.440207 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 19:21:58 crc kubenswrapper[4828]: I1205 19:21:58.442885 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 19:21:58 crc kubenswrapper[4828]: I1205 19:21:58.445320 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-tcrd2" Dec 05 19:21:58 crc kubenswrapper[4828]: I1205 19:21:58.459171 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 19:21:58 crc kubenswrapper[4828]: I1205 19:21:58.636726 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m57s4\" (UniqueName: \"kubernetes.io/projected/55a74937-57cf-442b-a1df-16a9df3b7948-kube-api-access-m57s4\") pod \"kube-state-metrics-0\" (UID: \"55a74937-57cf-442b-a1df-16a9df3b7948\") " pod="openstack/kube-state-metrics-0" Dec 05 19:21:58 crc kubenswrapper[4828]: I1205 19:21:58.738422 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m57s4\" (UniqueName: \"kubernetes.io/projected/55a74937-57cf-442b-a1df-16a9df3b7948-kube-api-access-m57s4\") pod \"kube-state-metrics-0\" (UID: \"55a74937-57cf-442b-a1df-16a9df3b7948\") " pod="openstack/kube-state-metrics-0" Dec 05 19:21:58 crc kubenswrapper[4828]: I1205 19:21:58.768191 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m57s4\" (UniqueName: \"kubernetes.io/projected/55a74937-57cf-442b-a1df-16a9df3b7948-kube-api-access-m57s4\") pod \"kube-state-metrics-0\" (UID: \"55a74937-57cf-442b-a1df-16a9df3b7948\") " pod="openstack/kube-state-metrics-0" Dec 05 19:21:59 crc kubenswrapper[4828]: I1205 19:21:59.065119 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.174026 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.176009 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.179639 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-n44mz" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.180024 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.180113 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.181360 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.183272 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.191030 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.239509 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac00d92-7825-4462-ab12-8d2059085d24-config\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.239593 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac00d92-7825-4462-ab12-8d2059085d24-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.239757 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ac00d92-7825-4462-ab12-8d2059085d24-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.239895 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac00d92-7825-4462-ab12-8d2059085d24-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.239940 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac00d92-7825-4462-ab12-8d2059085d24-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.240023 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzvzq\" (UniqueName: \"kubernetes.io/projected/7ac00d92-7825-4462-ab12-8d2059085d24-kube-api-access-pzvzq\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.240073 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ac00d92-7825-4462-ab12-8d2059085d24-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.240299 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.342043 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac00d92-7825-4462-ab12-8d2059085d24-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.342089 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac00d92-7825-4462-ab12-8d2059085d24-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.342127 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzvzq\" (UniqueName: \"kubernetes.io/projected/7ac00d92-7825-4462-ab12-8d2059085d24-kube-api-access-pzvzq\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.342152 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ac00d92-7825-4462-ab12-8d2059085d24-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.342196 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.342220 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac00d92-7825-4462-ab12-8d2059085d24-config\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.342243 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac00d92-7825-4462-ab12-8d2059085d24-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.342271 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ac00d92-7825-4462-ab12-8d2059085d24-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 
19:22:02.343223 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.343712 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ac00d92-7825-4462-ab12-8d2059085d24-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.344289 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac00d92-7825-4462-ab12-8d2059085d24-config\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.344534 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ac00d92-7825-4462-ab12-8d2059085d24-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.348341 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac00d92-7825-4462-ab12-8d2059085d24-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.348947 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac00d92-7825-4462-ab12-8d2059085d24-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.350275 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac00d92-7825-4462-ab12-8d2059085d24-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.362592 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzvzq\" (UniqueName: \"kubernetes.io/projected/7ac00d92-7825-4462-ab12-8d2059085d24-kube-api-access-pzvzq\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.379463 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7ac00d92-7825-4462-ab12-8d2059085d24\") " pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.512401 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.715362 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s6jdb"] Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.716363 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.718562 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-t2q26" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.718606 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.718804 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.728598 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-l467t"] Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.730510 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.745635 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s6jdb"] Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.747552 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3b912679-3c5e-4511-8769-8b8b4923d9fd-etc-ovs\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.747606 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjdmv\" (UniqueName: \"kubernetes.io/projected/f88a4161-1271-4374-9740-eaea879d6561-kube-api-access-xjdmv\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.747785 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f88a4161-1271-4374-9740-eaea879d6561-var-log-ovn\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.747863 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f88a4161-1271-4374-9740-eaea879d6561-var-run-ovn\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.747888 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f88a4161-1271-4374-9740-eaea879d6561-ovn-controller-tls-certs\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.747927 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/3b912679-3c5e-4511-8769-8b8b4923d9fd-var-run\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.748052 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3b912679-3c5e-4511-8769-8b8b4923d9fd-var-log\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.748102 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcb65\" (UniqueName: \"kubernetes.io/projected/3b912679-3c5e-4511-8769-8b8b4923d9fd-kube-api-access-gcb65\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.748125 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b912679-3c5e-4511-8769-8b8b4923d9fd-scripts\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.748181 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f88a4161-1271-4374-9740-eaea879d6561-var-run\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.748268 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3b912679-3c5e-4511-8769-8b8b4923d9fd-var-lib\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.748340 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f88a4161-1271-4374-9740-eaea879d6561-scripts\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.748389 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f88a4161-1271-4374-9740-eaea879d6561-combined-ca-bundle\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.756954 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-l467t"] Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850295 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3b912679-3c5e-4511-8769-8b8b4923d9fd-var-run\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850357 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3b912679-3c5e-4511-8769-8b8b4923d9fd-var-log\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850375 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcb65\" (UniqueName: \"kubernetes.io/projected/3b912679-3c5e-4511-8769-8b8b4923d9fd-kube-api-access-gcb65\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850402 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b912679-3c5e-4511-8769-8b8b4923d9fd-scripts\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850431 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f88a4161-1271-4374-9740-eaea879d6561-var-run\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850449 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3b912679-3c5e-4511-8769-8b8b4923d9fd-var-lib\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850480 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f88a4161-1271-4374-9740-eaea879d6561-scripts\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850504 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f88a4161-1271-4374-9740-eaea879d6561-combined-ca-bundle\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850545 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3b912679-3c5e-4511-8769-8b8b4923d9fd-etc-ovs\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850568 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjdmv\" (UniqueName: \"kubernetes.io/projected/f88a4161-1271-4374-9740-eaea879d6561-kube-api-access-xjdmv\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850586 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f88a4161-1271-4374-9740-eaea879d6561-var-log-ovn\") pod 
\"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850601 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f88a4161-1271-4374-9740-eaea879d6561-var-run-ovn\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850615 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f88a4161-1271-4374-9740-eaea879d6561-ovn-controller-tls-certs\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.850786 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3b912679-3c5e-4511-8769-8b8b4923d9fd-var-run\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.851038 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f88a4161-1271-4374-9740-eaea879d6561-var-run\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.851230 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3b912679-3c5e-4511-8769-8b8b4923d9fd-etc-ovs\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.851478 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3b912679-3c5e-4511-8769-8b8b4923d9fd-var-lib\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.851634 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f88a4161-1271-4374-9740-eaea879d6561-var-log-ovn\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.851679 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f88a4161-1271-4374-9740-eaea879d6561-var-run-ovn\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.851681 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3b912679-3c5e-4511-8769-8b8b4923d9fd-var-log\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.852945 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3b912679-3c5e-4511-8769-8b8b4923d9fd-scripts\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.853232 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f88a4161-1271-4374-9740-eaea879d6561-scripts\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.853770 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f88a4161-1271-4374-9740-eaea879d6561-combined-ca-bundle\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.856944 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f88a4161-1271-4374-9740-eaea879d6561-ovn-controller-tls-certs\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.870614 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjdmv\" (UniqueName: \"kubernetes.io/projected/f88a4161-1271-4374-9740-eaea879d6561-kube-api-access-xjdmv\") pod \"ovn-controller-s6jdb\" (UID: \"f88a4161-1271-4374-9740-eaea879d6561\") " pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:02 crc kubenswrapper[4828]: I1205 19:22:02.871358 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcb65\" (UniqueName: \"kubernetes.io/projected/3b912679-3c5e-4511-8769-8b8b4923d9fd-kube-api-access-gcb65\") pod \"ovn-controller-ovs-l467t\" (UID: \"3b912679-3c5e-4511-8769-8b8b4923d9fd\") " pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:03 crc kubenswrapper[4828]: I1205 19:22:03.068506 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:03 crc kubenswrapper[4828]: I1205 19:22:03.080541 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:05 crc kubenswrapper[4828]: I1205 19:22:05.259294 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:22:05 crc kubenswrapper[4828]: I1205 19:22:05.259603 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.112839 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.114483 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.117136 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.117389 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.117569 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-sdx62" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.118067 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.126175 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.203427 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.203467 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31b675bd-ec74-4876-91a0-95e4180e8cab-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.203495 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31b675bd-ec74-4876-91a0-95e4180e8cab-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.203531 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31b675bd-ec74-4876-91a0-95e4180e8cab-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.203551 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/31b675bd-ec74-4876-91a0-95e4180e8cab-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.203576 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/31b675bd-ec74-4876-91a0-95e4180e8cab-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.203617 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2rv5\" (UniqueName: \"kubernetes.io/projected/31b675bd-ec74-4876-91a0-95e4180e8cab-kube-api-access-v2rv5\") pod \"ovsdbserver-sb-0\" (UID: 
\"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.203643 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31b675bd-ec74-4876-91a0-95e4180e8cab-config\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.304228 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31b675bd-ec74-4876-91a0-95e4180e8cab-config\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.304278 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.304299 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31b675bd-ec74-4876-91a0-95e4180e8cab-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.304599 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.305209 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31b675bd-ec74-4876-91a0-95e4180e8cab-config\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.306180 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31b675bd-ec74-4876-91a0-95e4180e8cab-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.306228 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31b675bd-ec74-4876-91a0-95e4180e8cab-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.306246 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/31b675bd-ec74-4876-91a0-95e4180e8cab-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.306310 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/31b675bd-ec74-4876-91a0-95e4180e8cab-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.306353 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2rv5\" (UniqueName: \"kubernetes.io/projected/31b675bd-ec74-4876-91a0-95e4180e8cab-kube-api-access-v2rv5\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.306816 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/31b675bd-ec74-4876-91a0-95e4180e8cab-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.307294 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31b675bd-ec74-4876-91a0-95e4180e8cab-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.324108 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31b675bd-ec74-4876-91a0-95e4180e8cab-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.324936 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/31b675bd-ec74-4876-91a0-95e4180e8cab-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.325655 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31b675bd-ec74-4876-91a0-95e4180e8cab-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.326605 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2rv5\" (UniqueName: \"kubernetes.io/projected/31b675bd-ec74-4876-91a0-95e4180e8cab-kube-api-access-v2rv5\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.343521 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"31b675bd-ec74-4876-91a0-95e4180e8cab\") " pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:06 crc kubenswrapper[4828]: I1205 19:22:06.432004 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:10 crc kubenswrapper[4828]: E1205 19:22:10.765746 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 05 19:22:10 crc kubenswrapper[4828]: E1205 19:22:10.766488 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57czk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-drq5x_openstack(3d51fed5-9f28-495a-8a60-223fa1409caf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:22:10 crc kubenswrapper[4828]: E1205 19:22:10.767650 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x" podUID="3d51fed5-9f28-495a-8a60-223fa1409caf" Dec 05 19:22:10 crc kubenswrapper[4828]: E1205 19:22:10.868382 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 05 19:22:10 crc kubenswrapper[4828]: E1205 19:22:10.868603 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5sh9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-6glxp_openstack(621bc79a-9f33-4dbe-94e4-74cc81382cbb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:22:10 crc kubenswrapper[4828]: E1205 19:22:10.869901 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp" podUID="621bc79a-9f33-4dbe-94e4-74cc81382cbb" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.404900 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.413394 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.534949 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57czk\" (UniqueName: \"kubernetes.io/projected/3d51fed5-9f28-495a-8a60-223fa1409caf-kube-api-access-57czk\") pod \"3d51fed5-9f28-495a-8a60-223fa1409caf\" (UID: \"3d51fed5-9f28-495a-8a60-223fa1409caf\") " Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.535293 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621bc79a-9f33-4dbe-94e4-74cc81382cbb-config\") pod \"621bc79a-9f33-4dbe-94e4-74cc81382cbb\" (UID: \"621bc79a-9f33-4dbe-94e4-74cc81382cbb\") " Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.535376 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d51fed5-9f28-495a-8a60-223fa1409caf-config\") pod \"3d51fed5-9f28-495a-8a60-223fa1409caf\" (UID: \"3d51fed5-9f28-495a-8a60-223fa1409caf\") " Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.535405 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sh9p\" (UniqueName: \"kubernetes.io/projected/621bc79a-9f33-4dbe-94e4-74cc81382cbb-kube-api-access-5sh9p\") pod \"621bc79a-9f33-4dbe-94e4-74cc81382cbb\" (UID: \"621bc79a-9f33-4dbe-94e4-74cc81382cbb\") " Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.535452 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d51fed5-9f28-495a-8a60-223fa1409caf-dns-svc\") pod \"3d51fed5-9f28-495a-8a60-223fa1409caf\" (UID: \"3d51fed5-9f28-495a-8a60-223fa1409caf\") " Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.535933 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621bc79a-9f33-4dbe-94e4-74cc81382cbb-config" (OuterVolumeSpecName: "config") pod "621bc79a-9f33-4dbe-94e4-74cc81382cbb" (UID: "621bc79a-9f33-4dbe-94e4-74cc81382cbb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.537177 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d51fed5-9f28-495a-8a60-223fa1409caf-config" (OuterVolumeSpecName: "config") pod "3d51fed5-9f28-495a-8a60-223fa1409caf" (UID: "3d51fed5-9f28-495a-8a60-223fa1409caf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.538303 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d51fed5-9f28-495a-8a60-223fa1409caf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d51fed5-9f28-495a-8a60-223fa1409caf" (UID: "3d51fed5-9f28-495a-8a60-223fa1409caf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.538395 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d51fed5-9f28-495a-8a60-223fa1409caf-kube-api-access-57czk" (OuterVolumeSpecName: "kube-api-access-57czk") pod "3d51fed5-9f28-495a-8a60-223fa1409caf" (UID: "3d51fed5-9f28-495a-8a60-223fa1409caf"). InnerVolumeSpecName "kube-api-access-57czk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.541036 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621bc79a-9f33-4dbe-94e4-74cc81382cbb-kube-api-access-5sh9p" (OuterVolumeSpecName: "kube-api-access-5sh9p") pod "621bc79a-9f33-4dbe-94e4-74cc81382cbb" (UID: "621bc79a-9f33-4dbe-94e4-74cc81382cbb"). InnerVolumeSpecName "kube-api-access-5sh9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.638927 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621bc79a-9f33-4dbe-94e4-74cc81382cbb-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.638993 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d51fed5-9f28-495a-8a60-223fa1409caf-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.639006 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sh9p\" (UniqueName: \"kubernetes.io/projected/621bc79a-9f33-4dbe-94e4-74cc81382cbb-kube-api-access-5sh9p\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.639018 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d51fed5-9f28-495a-8a60-223fa1409caf-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.639028 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57czk\" (UniqueName: \"kubernetes.io/projected/3d51fed5-9f28-495a-8a60-223fa1409caf-kube-api-access-57czk\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.660281 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 19:22:12 crc kubenswrapper[4828]: I1205 19:22:12.797702 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 19:22:12 crc kubenswrapper[4828]: W1205 19:22:12.885310 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7064b569_c206_4ed9_8f28_3e5a7e92bf79.slice/crio-8e2eb7ab245f1e794049665db92df42ce7f66a63ff27555d3f240a86ea69f32f WatchSource:0}: Error finding container 8e2eb7ab245f1e794049665db92df42ce7f66a63ff27555d3f240a86ea69f32f: Status 404 returned error can't find the container with id 8e2eb7ab245f1e794049665db92df42ce7f66a63ff27555d3f240a86ea69f32f Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.028514 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7064b569-c206-4ed9-8f28-3e5a7e92bf79","Type":"ContainerStarted","Data":"8e2eb7ab245f1e794049665db92df42ce7f66a63ff27555d3f240a86ea69f32f"} Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.034604 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"55a74937-57cf-442b-a1df-16a9df3b7948","Type":"ContainerStarted","Data":"1feedb9eee9f148f6bfb1d22f2ffaa17fb94ff07d4de944e00e649644aa9ac3d"} Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.035663 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x" 
event={"ID":"3d51fed5-9f28-495a-8a60-223fa1409caf","Type":"ContainerDied","Data":"35db591772c7100f8a0868028655444b47eb50a7ca48c164603c96376604fda6"} Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.035728 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-drq5x" Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.037999 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp" event={"ID":"621bc79a-9f33-4dbe-94e4-74cc81382cbb","Type":"ContainerDied","Data":"d2bfc026b777c92c87ef4557d37a2a54f2fb1a8de4c5d4e08adcc2ca96037a80"} Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.038165 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6glxp" Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.099041 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.143307 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-drq5x"] Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.156994 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-drq5x"] Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.178851 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6glxp"] Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.187609 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6glxp"] Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.251526 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s6jdb"] Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.414108 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 05 19:22:13 crc kubenswrapper[4828]: W1205 19:22:13.431675 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93282807_6c59_42db_9235_8b2097a8f7a9.slice/crio-21cc15be54882d01fca891f33eff54faf8871098648286fb23f9981d680d5d94 WatchSource:0}: Error finding container 21cc15be54882d01fca891f33eff54faf8871098648286fb23f9981d680d5d94: Status 404 returned error can't find the container with id 21cc15be54882d01fca891f33eff54faf8871098648286fb23f9981d680d5d94 Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.602868 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 19:22:13 crc kubenswrapper[4828]: I1205 19:22:13.686079 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 19:22:13 crc kubenswrapper[4828]: W1205 19:22:13.700617 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac00d92_7825_4462_ab12_8d2059085d24.slice/crio-75ae891e9af89b29b0e14d81e2c3d7df6d8a525628a46293e861e53f9b800a78 WatchSource:0}: Error finding container 75ae891e9af89b29b0e14d81e2c3d7df6d8a525628a46293e861e53f9b800a78: Status 404 returned error can't find the container with id 75ae891e9af89b29b0e14d81e2c3d7df6d8a525628a46293e861e53f9b800a78 Dec 05 19:22:13 crc kubenswrapper[4828]: W1205 19:22:13.703332 4828 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31b675bd_ec74_4876_91a0_95e4180e8cab.slice/crio-b97817388336db06a3428d95d894a98948520375a79387de2a326468137b03ce WatchSource:0}: Error finding container b97817388336db06a3428d95d894a98948520375a79387de2a326468137b03ce: Status 404 returned error can't find the container with id b97817388336db06a3428d95d894a98948520375a79387de2a326468137b03ce Dec 05 19:22:14 crc kubenswrapper[4828]: I1205 19:22:14.049469 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2debacb-a691-43ee-aa79-670bbec2a98a","Type":"ContainerStarted","Data":"335030b552292f23c015866d9e77c2515419016d867efdf1a44de61ab5f674e1"} Dec 05 19:22:14 crc kubenswrapper[4828]: I1205 19:22:14.050861 4828 generic.go:334] "Generic (PLEG): container finished" podID="f33761e0-29f6-42c7-9c1d-cba24654a37a" containerID="66a9d63f1a422c9f2f7f288aa000d1412b17c8ad0429b541cde11d6358403eed" exitCode=0 Dec 05 19:22:14 crc kubenswrapper[4828]: I1205 19:22:14.050932 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm" event={"ID":"f33761e0-29f6-42c7-9c1d-cba24654a37a","Type":"ContainerDied","Data":"66a9d63f1a422c9f2f7f288aa000d1412b17c8ad0429b541cde11d6358403eed"} Dec 05 19:22:14 crc kubenswrapper[4828]: I1205 19:22:14.052982 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"93282807-6c59-42db-9235-8b2097a8f7a9","Type":"ContainerStarted","Data":"21cc15be54882d01fca891f33eff54faf8871098648286fb23f9981d680d5d94"} Dec 05 19:22:14 crc kubenswrapper[4828]: I1205 19:22:14.054210 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s6jdb" event={"ID":"f88a4161-1271-4374-9740-eaea879d6561","Type":"ContainerStarted","Data":"e350ef69d8ff1a889c4b97702220c0c3a76207f4187a2a8f6efe36e5a5915ae6"} Dec 05 19:22:14 crc kubenswrapper[4828]: I1205 19:22:14.056729 4828 generic.go:334] "Generic (PLEG): container finished" podID="6fb6a146-a614-4339-9e07-8892e82fed36" containerID="82a8e6fb46e85670cb8a93876f00c773f950d96831dbeff282acff94c12f0239" exitCode=0 Dec 05 19:22:14 crc kubenswrapper[4828]: I1205 19:22:14.056800 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4" event={"ID":"6fb6a146-a614-4339-9e07-8892e82fed36","Type":"ContainerDied","Data":"82a8e6fb46e85670cb8a93876f00c773f950d96831dbeff282acff94c12f0239"} Dec 05 19:22:14 crc kubenswrapper[4828]: I1205 19:22:14.057726 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7ac00d92-7825-4462-ab12-8d2059085d24","Type":"ContainerStarted","Data":"75ae891e9af89b29b0e14d81e2c3d7df6d8a525628a46293e861e53f9b800a78"} Dec 05 19:22:14 crc kubenswrapper[4828]: I1205 19:22:14.068885 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"31b675bd-ec74-4876-91a0-95e4180e8cab","Type":"ContainerStarted","Data":"b97817388336db06a3428d95d894a98948520375a79387de2a326468137b03ce"} Dec 05 19:22:14 crc kubenswrapper[4828]: I1205 19:22:14.236129 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-l467t"] Dec 05 19:22:14 crc kubenswrapper[4828]: I1205 19:22:14.457229 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d51fed5-9f28-495a-8a60-223fa1409caf" path="/var/lib/kubelet/pods/3d51fed5-9f28-495a-8a60-223fa1409caf/volumes" Dec 05 19:22:14 crc kubenswrapper[4828]: I1205 19:22:14.457835 4828 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621bc79a-9f33-4dbe-94e4-74cc81382cbb" path="/var/lib/kubelet/pods/621bc79a-9f33-4dbe-94e4-74cc81382cbb/volumes" Dec 05 19:22:14 crc kubenswrapper[4828]: W1205 19:22:14.490512 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b912679_3c5e_4511_8769_8b8b4923d9fd.slice/crio-7f9b37f1d8a6da52458eb3435ca110bb6352e492c26d52f4825fbf3a3a0d8bd3 WatchSource:0}: Error finding container 7f9b37f1d8a6da52458eb3435ca110bb6352e492c26d52f4825fbf3a3a0d8bd3: Status 404 returned error can't find the container with id 7f9b37f1d8a6da52458eb3435ca110bb6352e492c26d52f4825fbf3a3a0d8bd3 Dec 05 19:22:15 crc kubenswrapper[4828]: I1205 19:22:15.093776 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e21a851c-5179-4365-8e72-5dea16be90cc","Type":"ContainerStarted","Data":"c399f137e8476671322f5c3df219c0542ea3413c956c623f62ca825d6f30ae5d"} Dec 05 19:22:15 crc kubenswrapper[4828]: I1205 19:22:15.097188 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50db8d67-b1c6-4165-a526-8149092660ed","Type":"ContainerStarted","Data":"e065a0fe74f9c385bcc5b2d72a845ca0945938e9796cd9e236af91876f1347fa"} Dec 05 19:22:15 crc kubenswrapper[4828]: I1205 19:22:15.101539 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l467t" event={"ID":"3b912679-3c5e-4511-8769-8b8b4923d9fd","Type":"ContainerStarted","Data":"7f9b37f1d8a6da52458eb3435ca110bb6352e492c26d52f4825fbf3a3a0d8bd3"} Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.361268 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm" event={"ID":"f33761e0-29f6-42c7-9c1d-cba24654a37a","Type":"ContainerStarted","Data":"24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e"} Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.362200 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm" Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.364163 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s6jdb" event={"ID":"f88a4161-1271-4374-9740-eaea879d6561","Type":"ContainerStarted","Data":"75b7f4069c7ff41c3f2b8c512a8b3a22e64e7365423f98a0ba086679b3ec3c6a"} Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.364227 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-s6jdb" Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.366258 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4" event={"ID":"6fb6a146-a614-4339-9e07-8892e82fed36","Type":"ContainerStarted","Data":"99db7c7f7f963389068b38d85c6fb45ce161a06deee4cf10ec9a2b779dc9e744"} Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.366369 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4" Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.371599 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7ac00d92-7825-4462-ab12-8d2059085d24","Type":"ContainerStarted","Data":"f6b844557555ba29c6d0f132ab2237e40834fe689a55b314d51eaaf3adc09d70"} Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.374737 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"55a74937-57cf-442b-a1df-16a9df3b7948","Type":"ContainerStarted","Data":"059661c974b913a39686e79cd07b42429f83f1824e188d2290e8c5fd790a6dc4"} Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.374857 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.381756 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"93282807-6c59-42db-9235-8b2097a8f7a9","Type":"ContainerStarted","Data":"314f762d4010bc6b286af2377ec58bd72f0a78bf01266c4168d8476e411588be"} Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.381952 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.384007 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm" podStartSLOduration=11.594785692 podStartE2EDuration="31.383990743s" podCreationTimestamp="2025-12-05 19:21:51 +0000 UTC" firstStartedPulling="2025-12-05 19:21:52.73575778 +0000 UTC m=+1090.630980086" lastFinishedPulling="2025-12-05 19:22:12.524962831 +0000 UTC m=+1110.420185137" observedRunningTime="2025-12-05 19:22:22.379107049 +0000 UTC m=+1120.274329375" watchObservedRunningTime="2025-12-05 19:22:22.383990743 +0000 UTC m=+1120.279213039" Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.384937 4828 generic.go:334] "Generic (PLEG): container finished" podID="3b912679-3c5e-4511-8769-8b8b4923d9fd" containerID="2835725dc57f5a6488886ab56853daf5b55a0df0877415dd1fa6a661b9dddcf0" exitCode=0 Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.385083 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l467t" event={"ID":"3b912679-3c5e-4511-8769-8b8b4923d9fd","Type":"ContainerDied","Data":"2835725dc57f5a6488886ab56853daf5b55a0df0877415dd1fa6a661b9dddcf0"} Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.386967 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7064b569-c206-4ed9-8f28-3e5a7e92bf79","Type":"ContainerStarted","Data":"62893ee0853694bc8178e42354d1e67672a25626054751df709341c768fc5dfe"} Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.391848 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"31b675bd-ec74-4876-91a0-95e4180e8cab","Type":"ContainerStarted","Data":"fda26fb985532177ef40a68c1dafa0d486f9a28356a888692cf21ff7340d310c"} Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.394409 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2debacb-a691-43ee-aa79-670bbec2a98a","Type":"ContainerStarted","Data":"d7ce793cf4ec010d94eb81b959f552bcecf51df474501958676695362f02edb9"} Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.405421 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.262883696 podStartE2EDuration="24.40540285s" podCreationTimestamp="2025-12-05 19:21:58 +0000 UTC" firstStartedPulling="2025-12-05 19:22:12.929199019 +0000 UTC m=+1110.824421325" lastFinishedPulling="2025-12-05 19:22:21.071718133 +0000 UTC m=+1118.966940479" observedRunningTime="2025-12-05 19:22:22.399106747 +0000 UTC m=+1120.294329043" watchObservedRunningTime="2025-12-05 19:22:22.40540285 +0000 UTC m=+1120.300625156" Dec 05 
19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.427582 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-s6jdb" podStartSLOduration=12.286619615 podStartE2EDuration="20.427564046s" podCreationTimestamp="2025-12-05 19:22:02 +0000 UTC" firstStartedPulling="2025-12-05 19:22:13.260085486 +0000 UTC m=+1111.155307792" lastFinishedPulling="2025-12-05 19:22:21.401029917 +0000 UTC m=+1119.296252223" observedRunningTime="2025-12-05 19:22:22.419477536 +0000 UTC m=+1120.314699832" watchObservedRunningTime="2025-12-05 19:22:22.427564046 +0000 UTC m=+1120.322786352" Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.471381 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4" podStartSLOduration=11.836135796 podStartE2EDuration="31.471354347s" podCreationTimestamp="2025-12-05 19:21:51 +0000 UTC" firstStartedPulling="2025-12-05 19:21:53.248730297 +0000 UTC m=+1091.143952603" lastFinishedPulling="2025-12-05 19:22:12.883948848 +0000 UTC m=+1110.779171154" observedRunningTime="2025-12-05 19:22:22.442324581 +0000 UTC m=+1120.337546897" watchObservedRunningTime="2025-12-05 19:22:22.471354347 +0000 UTC m=+1120.366576653" Dec 05 19:22:22 crc kubenswrapper[4828]: I1205 19:22:22.542894 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.090305258 podStartE2EDuration="26.542871787s" podCreationTimestamp="2025-12-05 19:21:56 +0000 UTC" firstStartedPulling="2025-12-05 19:22:13.438286078 +0000 UTC m=+1111.333508384" lastFinishedPulling="2025-12-05 19:22:20.890852607 +0000 UTC m=+1118.786074913" observedRunningTime="2025-12-05 19:22:22.533337785 +0000 UTC m=+1120.428560091" watchObservedRunningTime="2025-12-05 19:22:22.542871787 +0000 UTC m=+1120.438094193" Dec 05 19:22:25 crc kubenswrapper[4828]: I1205 19:22:25.430022 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l467t" event={"ID":"3b912679-3c5e-4511-8769-8b8b4923d9fd","Type":"ContainerStarted","Data":"786866b2ff1b92467031be6a1117d4d8c117144910ebdc4bf9083fe1e5eef52a"} Dec 05 19:22:26 crc kubenswrapper[4828]: I1205 19:22:26.485863 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:26 crc kubenswrapper[4828]: I1205 19:22:26.486106 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l467t" event={"ID":"3b912679-3c5e-4511-8769-8b8b4923d9fd","Type":"ContainerStarted","Data":"de88f40719a05e25705ffbd0026853a96dcb668c5efe0b6286ff4a860fcca23b"} Dec 05 19:22:26 crc kubenswrapper[4828]: I1205 19:22:26.486123 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-l467t" Dec 05 19:22:26 crc kubenswrapper[4828]: I1205 19:22:26.566231 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-l467t" podStartSLOduration=17.680306735 podStartE2EDuration="24.566207055s" podCreationTimestamp="2025-12-05 19:22:02 +0000 UTC" firstStartedPulling="2025-12-05 19:22:14.49444884 +0000 UTC m=+1112.389671146" lastFinishedPulling="2025-12-05 19:22:21.38034916 +0000 UTC m=+1119.275571466" observedRunningTime="2025-12-05 19:22:26.558895695 +0000 UTC m=+1124.454118001" watchObservedRunningTime="2025-12-05 19:22:26.566207055 +0000 UTC m=+1124.461429361" Dec 05 19:22:26 crc kubenswrapper[4828]: I1205 19:22:26.698736 4828 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/memcached-0" Dec 05 19:22:26 crc kubenswrapper[4828]: I1205 19:22:26.870500 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm" Dec 05 19:22:27 crc kubenswrapper[4828]: I1205 19:22:27.354039 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4" Dec 05 19:22:27 crc kubenswrapper[4828]: I1205 19:22:27.413028 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7j6xm"] Dec 05 19:22:27 crc kubenswrapper[4828]: I1205 19:22:27.474153 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm" podUID="f33761e0-29f6-42c7-9c1d-cba24654a37a" containerName="dnsmasq-dns" containerID="cri-o://24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e" gracePeriod=10 Dec 05 19:22:27 crc kubenswrapper[4828]: E1205 19:22:27.633911 4828 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf33761e0_29f6_42c7_9c1d_cba24654a37a.slice/crio-24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e.scope\": RecentStats: unable to find data in memory cache]" Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.205946 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm" Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.302462 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33761e0-29f6-42c7-9c1d-cba24654a37a-dns-svc\") pod \"f33761e0-29f6-42c7-9c1d-cba24654a37a\" (UID: \"f33761e0-29f6-42c7-9c1d-cba24654a37a\") " Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.302506 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33761e0-29f6-42c7-9c1d-cba24654a37a-config\") pod \"f33761e0-29f6-42c7-9c1d-cba24654a37a\" (UID: \"f33761e0-29f6-42c7-9c1d-cba24654a37a\") " Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.302530 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwpjm\" (UniqueName: \"kubernetes.io/projected/f33761e0-29f6-42c7-9c1d-cba24654a37a-kube-api-access-pwpjm\") pod \"f33761e0-29f6-42c7-9c1d-cba24654a37a\" (UID: \"f33761e0-29f6-42c7-9c1d-cba24654a37a\") " Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.308548 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33761e0-29f6-42c7-9c1d-cba24654a37a-kube-api-access-pwpjm" (OuterVolumeSpecName: "kube-api-access-pwpjm") pod "f33761e0-29f6-42c7-9c1d-cba24654a37a" (UID: "f33761e0-29f6-42c7-9c1d-cba24654a37a"). InnerVolumeSpecName "kube-api-access-pwpjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.342785 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33761e0-29f6-42c7-9c1d-cba24654a37a-config" (OuterVolumeSpecName: "config") pod "f33761e0-29f6-42c7-9c1d-cba24654a37a" (UID: "f33761e0-29f6-42c7-9c1d-cba24654a37a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.345442 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33761e0-29f6-42c7-9c1d-cba24654a37a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f33761e0-29f6-42c7-9c1d-cba24654a37a" (UID: "f33761e0-29f6-42c7-9c1d-cba24654a37a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.403913 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwpjm\" (UniqueName: \"kubernetes.io/projected/f33761e0-29f6-42c7-9c1d-cba24654a37a-kube-api-access-pwpjm\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.403943 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33761e0-29f6-42c7-9c1d-cba24654a37a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.403952 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33761e0-29f6-42c7-9c1d-cba24654a37a-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.499324 4828 generic.go:334] "Generic (PLEG): container finished" podID="a2debacb-a691-43ee-aa79-670bbec2a98a" containerID="d7ce793cf4ec010d94eb81b959f552bcecf51df474501958676695362f02edb9" exitCode=0 Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.499404 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2debacb-a691-43ee-aa79-670bbec2a98a","Type":"ContainerDied","Data":"d7ce793cf4ec010d94eb81b959f552bcecf51df474501958676695362f02edb9"} Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.503398 4828 generic.go:334] "Generic (PLEG): container finished" podID="f33761e0-29f6-42c7-9c1d-cba24654a37a" containerID="24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e" exitCode=0 Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.503449 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm" event={"ID":"f33761e0-29f6-42c7-9c1d-cba24654a37a","Type":"ContainerDied","Data":"24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e"} Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.503479 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm" event={"ID":"f33761e0-29f6-42c7-9c1d-cba24654a37a","Type":"ContainerDied","Data":"7abcdf18af1765934dc9ead9941e956740ca7d0444de4d57951559a1aa847018"} Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.503496 4828 scope.go:117] "RemoveContainer" containerID="24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e" Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.503566 4828 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.503566 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7j6xm"
Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.554782 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7j6xm"]
Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.560054 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7j6xm"]
Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.809850 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-zcskw"]
Dec 05 19:22:28 crc kubenswrapper[4828]: E1205 19:22:28.810436 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33761e0-29f6-42c7-9c1d-cba24654a37a" containerName="init"
Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.810448 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33761e0-29f6-42c7-9c1d-cba24654a37a" containerName="init"
Dec 05 19:22:28 crc kubenswrapper[4828]: E1205 19:22:28.810472 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33761e0-29f6-42c7-9c1d-cba24654a37a" containerName="dnsmasq-dns"
Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.810479 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33761e0-29f6-42c7-9c1d-cba24654a37a" containerName="dnsmasq-dns"
Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.810616 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33761e0-29f6-42c7-9c1d-cba24654a37a" containerName="dnsmasq-dns"
Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.811488 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw"
Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.829523 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-zcskw"]
Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.911892 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fcr\" (UniqueName: \"kubernetes.io/projected/24ef607c-0e94-4e51-9fbe-54745774cd5e-kube-api-access-t7fcr\") pod \"dnsmasq-dns-7cb5889db5-zcskw\" (UID: \"24ef607c-0e94-4e51-9fbe-54745774cd5e\") " pod="openstack/dnsmasq-dns-7cb5889db5-zcskw"
Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.912134 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ef607c-0e94-4e51-9fbe-54745774cd5e-config\") pod \"dnsmasq-dns-7cb5889db5-zcskw\" (UID: \"24ef607c-0e94-4e51-9fbe-54745774cd5e\") " pod="openstack/dnsmasq-dns-7cb5889db5-zcskw"
Dec 05 19:22:28 crc kubenswrapper[4828]: I1205 19:22:28.912227 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ef607c-0e94-4e51-9fbe-54745774cd5e-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-zcskw\" (UID: \"24ef607c-0e94-4e51-9fbe-54745774cd5e\") " pod="openstack/dnsmasq-dns-7cb5889db5-zcskw"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.014175 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ef607c-0e94-4e51-9fbe-54745774cd5e-config\") pod \"dnsmasq-dns-7cb5889db5-zcskw\" (UID: \"24ef607c-0e94-4e51-9fbe-54745774cd5e\") " pod="openstack/dnsmasq-dns-7cb5889db5-zcskw"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.014235 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ef607c-0e94-4e51-9fbe-54745774cd5e-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-zcskw\" (UID: \"24ef607c-0e94-4e51-9fbe-54745774cd5e\") " pod="openstack/dnsmasq-dns-7cb5889db5-zcskw"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.014283 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7fcr\" (UniqueName: \"kubernetes.io/projected/24ef607c-0e94-4e51-9fbe-54745774cd5e-kube-api-access-t7fcr\") pod \"dnsmasq-dns-7cb5889db5-zcskw\" (UID: \"24ef607c-0e94-4e51-9fbe-54745774cd5e\") " pod="openstack/dnsmasq-dns-7cb5889db5-zcskw"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.015271 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ef607c-0e94-4e51-9fbe-54745774cd5e-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-zcskw\" (UID: \"24ef607c-0e94-4e51-9fbe-54745774cd5e\") " pod="openstack/dnsmasq-dns-7cb5889db5-zcskw"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.015283 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ef607c-0e94-4e51-9fbe-54745774cd5e-config\") pod \"dnsmasq-dns-7cb5889db5-zcskw\" (UID: \"24ef607c-0e94-4e51-9fbe-54745774cd5e\") " pod="openstack/dnsmasq-dns-7cb5889db5-zcskw"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.033187 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7fcr\" (UniqueName: \"kubernetes.io/projected/24ef607c-0e94-4e51-9fbe-54745774cd5e-kube-api-access-t7fcr\") pod \"dnsmasq-dns-7cb5889db5-zcskw\" (UID: \"24ef607c-0e94-4e51-9fbe-54745774cd5e\") " pod="openstack/dnsmasq-dns-7cb5889db5-zcskw"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.073171 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.137183 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.510965 4828 generic.go:334] "Generic (PLEG): container finished" podID="7064b569-c206-4ed9-8f28-3e5a7e92bf79" containerID="62893ee0853694bc8178e42354d1e67672a25626054751df709341c768fc5dfe" exitCode=0
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.510990 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7064b569-c206-4ed9-8f28-3e5a7e92bf79","Type":"ContainerDied","Data":"62893ee0853694bc8178e42354d1e67672a25626054751df709341c768fc5dfe"}
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.948157 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.952717 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.954631 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.955999 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.956114 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9k4b4"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.956176 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Dec 05 19:22:29 crc kubenswrapper[4828]: I1205 19:22:29.976060 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.132349 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/821554f9-a51e-4a16-a053-b8bc18d93a9e-lock\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.132456 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2h5d\" (UniqueName: \"kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-kube-api-access-c2h5d\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.132600 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.133354 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.133504 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/821554f9-a51e-4a16-a053-b8bc18d93a9e-cache\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.235427 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.235864 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/821554f9-a51e-4a16-a053-b8bc18d93a9e-cache\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: E1205 19:22:30.235716 4828 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 05 19:22:30 crc kubenswrapper[4828]: E1205 19:22:30.235940 4828 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.235952 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/821554f9-a51e-4a16-a053-b8bc18d93a9e-lock\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: E1205 19:22:30.236016 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift podName:821554f9-a51e-4a16-a053-b8bc18d93a9e nodeName:}" failed. No retries permitted until 2025-12-05 19:22:30.735983896 +0000 UTC m=+1128.631206292 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift") pod "swift-storage-0" (UID: "821554f9-a51e-4a16-a053-b8bc18d93a9e") : configmap "swift-ring-files" not found
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.236050 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2h5d\" (UniqueName: \"kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-kube-api-access-c2h5d\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.236129 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.236484 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/821554f9-a51e-4a16-a053-b8bc18d93a9e-lock\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.236667 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.237488 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/821554f9-a51e-4a16-a053-b8bc18d93a9e-cache\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.264653 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0"
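The nestedpendingoperations entry above is kubelet's per-volume retry throttle: the failed SetUp of etc-swift is retried after 500ms, and each subsequent failure doubles the delay; the 1s, 2s, 4s, 8s and 16s retries for this same volume appear further down. A sketch of that schedule (the 500ms seed and doubling factor are read straight off the log; the cap is an assumption, not taken from kubelet source):

    package main

    import (
        "fmt"
        "time"
    )

    // backoffSchedule lists successive retry delays for a failing volume
    // operation: start at 500ms and double each time, as the
    // durationBeforeRetry values in this log show. maxDelay is an assumed cap.
    func backoffSchedule(n int, maxDelay time.Duration) []time.Duration {
        d := 500 * time.Millisecond
        out := make([]time.Duration, 0, n)
        for i := 0; i < n; i++ {
            out = append(out, d)
            d *= 2
            if d > maxDelay {
                d = maxDelay
            }
        }
        return out
    }

    func main() {
        fmt.Println(backoffSchedule(6, 2*time.Minute))
        // [500ms 1s 2s 4s 8s 16s] — the exact retry intervals logged
        // for swift-storage-0's etc-swift volume
    }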
\"kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-kube-api-access-c2h5d\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.457393 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33761e0-29f6-42c7-9c1d-cba24654a37a" path="/var/lib/kubelet/pods/f33761e0-29f6-42c7-9c1d-cba24654a37a/volumes" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.458296 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vs6cm"] Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.459470 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.462201 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.462452 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.492411 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.527893 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vs6cm"] Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.642705 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/784df3ad-b111-476d-ad5c-e10ee3e04b2f-etc-swift\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.642773 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-combined-ca-bundle\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.642913 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-swiftconf\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.642945 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lljfz\" (UniqueName: \"kubernetes.io/projected/784df3ad-b111-476d-ad5c-e10ee3e04b2f-kube-api-access-lljfz\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.642984 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-dispersionconf\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.643012 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/784df3ad-b111-476d-ad5c-e10ee3e04b2f-scripts\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.643031 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/784df3ad-b111-476d-ad5c-e10ee3e04b2f-ring-data-devices\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.744856 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-swiftconf\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.744975 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lljfz\" (UniqueName: \"kubernetes.io/projected/784df3ad-b111-476d-ad5c-e10ee3e04b2f-kube-api-access-lljfz\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.745075 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-dispersionconf\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.745157 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/784df3ad-b111-476d-ad5c-e10ee3e04b2f-scripts\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.745212 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/784df3ad-b111-476d-ad5c-e10ee3e04b2f-ring-data-devices\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.745318 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.745403 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/784df3ad-b111-476d-ad5c-e10ee3e04b2f-etc-swift\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.745493 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-combined-ca-bundle\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: E1205 19:22:30.746556 4828 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 19:22:30 crc kubenswrapper[4828]: E1205 19:22:30.746610 4828 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.746688 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/784df3ad-b111-476d-ad5c-e10ee3e04b2f-ring-data-devices\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.747361 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/784df3ad-b111-476d-ad5c-e10ee3e04b2f-etc-swift\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: E1205 19:22:30.747412 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift podName:821554f9-a51e-4a16-a053-b8bc18d93a9e nodeName:}" failed. No retries permitted until 2025-12-05 19:22:31.746755813 +0000 UTC m=+1129.641978159 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift") pod "swift-storage-0" (UID: "821554f9-a51e-4a16-a053-b8bc18d93a9e") : configmap "swift-ring-files" not found Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.747894 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/784df3ad-b111-476d-ad5c-e10ee3e04b2f-scripts\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.750796 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-dispersionconf\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.761314 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-swiftconf\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.761581 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-combined-ca-bundle\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.769469 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lljfz\" (UniqueName: \"kubernetes.io/projected/784df3ad-b111-476d-ad5c-e10ee3e04b2f-kube-api-access-lljfz\") pod \"swift-ring-rebalance-vs6cm\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:30 crc kubenswrapper[4828]: I1205 19:22:30.837560 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:31 crc kubenswrapper[4828]: I1205 19:22:31.776451 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0" Dec 05 19:22:31 crc kubenswrapper[4828]: E1205 19:22:31.776688 4828 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 19:22:31 crc kubenswrapper[4828]: E1205 19:22:31.776706 4828 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 19:22:31 crc kubenswrapper[4828]: E1205 19:22:31.776766 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift podName:821554f9-a51e-4a16-a053-b8bc18d93a9e nodeName:}" failed. No retries permitted until 2025-12-05 19:22:33.776747797 +0000 UTC m=+1131.671970114 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift") pod "swift-storage-0" (UID: "821554f9-a51e-4a16-a053-b8bc18d93a9e") : configmap "swift-ring-files" not found Dec 05 19:22:33 crc kubenswrapper[4828]: I1205 19:22:33.807960 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0" Dec 05 19:22:33 crc kubenswrapper[4828]: E1205 19:22:33.808195 4828 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 19:22:33 crc kubenswrapper[4828]: E1205 19:22:33.808516 4828 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 19:22:33 crc kubenswrapper[4828]: E1205 19:22:33.808597 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift podName:821554f9-a51e-4a16-a053-b8bc18d93a9e nodeName:}" failed. No retries permitted until 2025-12-05 19:22:37.808572504 +0000 UTC m=+1135.703794820 (durationBeforeRetry 4s). 
Dec 05 19:22:33 crc kubenswrapper[4828]: E1205 19:22:33.808597 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift podName:821554f9-a51e-4a16-a053-b8bc18d93a9e nodeName:}" failed. No retries permitted until 2025-12-05 19:22:37.808572504 +0000 UTC m=+1135.703794820 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift") pod "swift-storage-0" (UID: "821554f9-a51e-4a16-a053-b8bc18d93a9e") : configmap "swift-ring-files" not found
Dec 05 19:22:35 crc kubenswrapper[4828]: I1205 19:22:35.260011 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 19:22:35 crc kubenswrapper[4828]: I1205 19:22:35.260387 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 19:22:37 crc kubenswrapper[4828]: E1205 19:22:37.470604 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
Dec 05 19:22:37 crc kubenswrapper[4828]: E1205 19:22:37.470800 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n58fh7h64chcbh5bbhdh5f9h75h5bch585h657h564h695h5d6hb6h5ddh595h5f5h576h5fbhc9hbhf8h99h594hb4h84hffh66bh656h559h9q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v2rv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(31b675bd-ec74-4876-91a0-95e4180e8cab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 19:22:37 crc kubenswrapper[4828]: E1205 19:22:37.472965 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="31b675bd-ec74-4876-91a0-95e4180e8cab"
Dec 05 19:22:37 crc kubenswrapper[4828]: E1205 19:22:37.481438 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
Dec 05 19:22:37 crc kubenswrapper[4828]: E1205 19:22:37.481588 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n67dh5cfhdh56chffh684hd9h76h5c4h5b7h8bh65bhfh644hc6h5d7h65fh5c5h5ffh9h648h56ch5f9hf5h57dh6h65fh596h5d7h584h5c7h66q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzvzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(7ac00d92-7825-4462-ab12-8d2059085d24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 19:22:37 crc kubenswrapper[4828]: E1205 19:22:37.483328 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="7ac00d92-7825-4462-ab12-8d2059085d24"
Dec 05 19:22:37 crc kubenswrapper[4828]: I1205 19:22:37.503193 4828 scope.go:117] "RemoveContainer" containerID="66a9d63f1a422c9f2f7f288aa000d1412b17c8ad0429b541cde11d6358403eed"
Dec 05 19:22:37 crc kubenswrapper[4828]: I1205 19:22:37.695406 4828 scope.go:117] "RemoveContainer" containerID="24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e"
Dec 05 19:22:37 crc kubenswrapper[4828]: E1205 19:22:37.696338 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e\": container with ID starting with 24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e not found: ID does not exist" containerID="24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e"
Dec 05 19:22:37 crc kubenswrapper[4828]: I1205 19:22:37.696382 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e"} err="failed to get container status \"24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e\": rpc error: code = NotFound desc = could not find container \"24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e\": container with ID starting with 24962eb800985efafff1a4522e5d969c41a17c4002c0054e7013c011a9596c5e not found: ID does not exist"
Dec 05 19:22:37 crc kubenswrapper[4828]: I1205 19:22:37.696415 4828 scope.go:117] "RemoveContainer" containerID="66a9d63f1a422c9f2f7f288aa000d1412b17c8ad0429b541cde11d6358403eed"
Dec 05 19:22:37 crc kubenswrapper[4828]: E1205 19:22:37.698171 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66a9d63f1a422c9f2f7f288aa000d1412b17c8ad0429b541cde11d6358403eed\": container with ID starting with 66a9d63f1a422c9f2f7f288aa000d1412b17c8ad0429b541cde11d6358403eed not found: ID does not exist" containerID="66a9d63f1a422c9f2f7f288aa000d1412b17c8ad0429b541cde11d6358403eed"
Dec 05 19:22:37 crc kubenswrapper[4828]: I1205 19:22:37.698196 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66a9d63f1a422c9f2f7f288aa000d1412b17c8ad0429b541cde11d6358403eed"} err="failed to get container status \"66a9d63f1a422c9f2f7f288aa000d1412b17c8ad0429b541cde11d6358403eed\": rpc error: code = NotFound desc = could not find container \"66a9d63f1a422c9f2f7f288aa000d1412b17c8ad0429b541cde11d6358403eed\": container with ID starting with 66a9d63f1a422c9f2f7f288aa000d1412b17c8ad0429b541cde11d6358403eed not found: ID does not exist"
Dec 05 19:22:37 crc kubenswrapper[4828]: E1205 19:22:37.711727 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7ac00d92-7825-4462-ab12-8d2059085d24"
\\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="31b675bd-ec74-4876-91a0-95e4180e8cab" Dec 05 19:22:37 crc kubenswrapper[4828]: I1205 19:22:37.881117 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0" Dec 05 19:22:37 crc kubenswrapper[4828]: E1205 19:22:37.882019 4828 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 19:22:37 crc kubenswrapper[4828]: E1205 19:22:37.882035 4828 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 19:22:37 crc kubenswrapper[4828]: E1205 19:22:37.882101 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift podName:821554f9-a51e-4a16-a053-b8bc18d93a9e nodeName:}" failed. No retries permitted until 2025-12-05 19:22:45.882086909 +0000 UTC m=+1143.777309215 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift") pod "swift-storage-0" (UID: "821554f9-a51e-4a16-a053-b8bc18d93a9e") : configmap "swift-ring-files" not found Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.089742 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-zcskw"] Dec 05 19:22:38 crc kubenswrapper[4828]: W1205 19:22:38.173599 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod784df3ad_b111_476d_ad5c_e10ee3e04b2f.slice/crio-b0faf1022dada597de0feb6e3f5edadbae6e2dce147a7d937798cb151d57fa9d WatchSource:0}: Error finding container b0faf1022dada597de0feb6e3f5edadbae6e2dce147a7d937798cb151d57fa9d: Status 404 returned error can't find the container with id b0faf1022dada597de0feb6e3f5edadbae6e2dce147a7d937798cb151d57fa9d Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.175223 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vs6cm"] Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.514098 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.561321 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.719733 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a2debacb-a691-43ee-aa79-670bbec2a98a","Type":"ContainerStarted","Data":"636a0ddc97779918e58045fa5e979527dc187fbecc56030f3597dfc60dd218f3"} Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.721347 4828 generic.go:334] "Generic (PLEG): container finished" podID="24ef607c-0e94-4e51-9fbe-54745774cd5e" containerID="dde742afe00bf4c7a4fdd4a14d6a10741756b07cca8f2f3240f2e3fe0d6f1edd" exitCode=0 Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.721419 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw" 
event={"ID":"24ef607c-0e94-4e51-9fbe-54745774cd5e","Type":"ContainerDied","Data":"dde742afe00bf4c7a4fdd4a14d6a10741756b07cca8f2f3240f2e3fe0d6f1edd"} Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.721477 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw" event={"ID":"24ef607c-0e94-4e51-9fbe-54745774cd5e","Type":"ContainerStarted","Data":"3000e4d4d5bc6183d70696a07f1e84b36a4c1fa166ea6cfcd6c57a607c464985"} Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.724223 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7064b569-c206-4ed9-8f28-3e5a7e92bf79","Type":"ContainerStarted","Data":"ae6ee898f53136b535e81a320c78451ead3605db00eb71aa41ac268a0e8f22af"} Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.727927 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vs6cm" event={"ID":"784df3ad-b111-476d-ad5c-e10ee3e04b2f","Type":"ContainerStarted","Data":"b0faf1022dada597de0feb6e3f5edadbae6e2dce147a7d937798cb151d57fa9d"} Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.728241 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:38 crc kubenswrapper[4828]: E1205 19:22:38.729880 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7ac00d92-7825-4462-ab12-8d2059085d24" Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.768280 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=37.496245409 podStartE2EDuration="45.768259722s" podCreationTimestamp="2025-12-05 19:21:53 +0000 UTC" firstStartedPulling="2025-12-05 19:22:13.108338797 +0000 UTC m=+1111.003561103" lastFinishedPulling="2025-12-05 19:22:21.38035311 +0000 UTC m=+1119.275575416" observedRunningTime="2025-12-05 19:22:38.750072543 +0000 UTC m=+1136.645294889" watchObservedRunningTime="2025-12-05 19:22:38.768259722 +0000 UTC m=+1136.663482038" Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.788195 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 05 19:22:38 crc kubenswrapper[4828]: I1205 19:22:38.819315 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=36.65210357 podStartE2EDuration="44.819296741s" podCreationTimestamp="2025-12-05 19:21:54 +0000 UTC" firstStartedPulling="2025-12-05 19:22:12.904520692 +0000 UTC m=+1110.799743008" lastFinishedPulling="2025-12-05 19:22:21.071713873 +0000 UTC m=+1118.966936179" observedRunningTime="2025-12-05 19:22:38.81709034 +0000 UTC m=+1136.712312656" watchObservedRunningTime="2025-12-05 19:22:38.819296741 +0000 UTC m=+1136.714519057" Dec 05 19:22:39 crc kubenswrapper[4828]: I1205 19:22:39.432440 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:39 crc kubenswrapper[4828]: E1205 19:22:39.435651 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="31b675bd-ec74-4876-91a0-95e4180e8cab" Dec 05 19:22:39 crc kubenswrapper[4828]: I1205 19:22:39.496362 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:39 crc kubenswrapper[4828]: I1205 19:22:39.737366 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw" event={"ID":"24ef607c-0e94-4e51-9fbe-54745774cd5e","Type":"ContainerStarted","Data":"866813dcad96d1097a1856cc762d6d4066aaf70e7d992657a70c2d3799454760"} Dec 05 19:22:39 crc kubenswrapper[4828]: I1205 19:22:39.738285 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:39 crc kubenswrapper[4828]: I1205 19:22:39.738651 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw" Dec 05 19:22:39 crc kubenswrapper[4828]: E1205 19:22:39.738724 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="31b675bd-ec74-4876-91a0-95e4180e8cab" Dec 05 19:22:39 crc kubenswrapper[4828]: E1205 19:22:39.739678 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7ac00d92-7825-4462-ab12-8d2059085d24" Dec 05 19:22:39 crc kubenswrapper[4828]: I1205 19:22:39.784152 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw" podStartSLOduration=11.784134099 podStartE2EDuration="11.784134099s" podCreationTimestamp="2025-12-05 19:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:22:39.777183209 +0000 UTC m=+1137.672405535" watchObservedRunningTime="2025-12-05 19:22:39.784134099 +0000 UTC m=+1137.679356405" Dec 05 19:22:39 crc kubenswrapper[4828]: I1205 19:22:39.801051 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 05 19:22:40 crc kubenswrapper[4828]: E1205 19:22:40.751360 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7ac00d92-7825-4462-ab12-8d2059085d24" Dec 05 19:22:40 crc kubenswrapper[4828]: E1205 19:22:40.752381 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="31b675bd-ec74-4876-91a0-95e4180e8cab" Dec 05 19:22:41 crc kubenswrapper[4828]: E1205 19:22:41.759678 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="31b675bd-ec74-4876-91a0-95e4180e8cab" Dec 05 19:22:42 crc kubenswrapper[4828]: I1205 19:22:42.769562 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vs6cm" event={"ID":"784df3ad-b111-476d-ad5c-e10ee3e04b2f","Type":"ContainerStarted","Data":"31b9f674a8a336bfcb204bf8e498f41081bf3e675223182aab1ec6825d004989"} Dec 05 19:22:42 crc kubenswrapper[4828]: I1205 19:22:42.790910 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-vs6cm" podStartSLOduration=9.311087047000001 podStartE2EDuration="12.790892091s" podCreationTimestamp="2025-12-05 19:22:30 +0000 UTC" firstStartedPulling="2025-12-05 19:22:38.17734983 +0000 UTC m=+1136.072572136" lastFinishedPulling="2025-12-05 19:22:41.657154874 +0000 UTC m=+1139.552377180" observedRunningTime="2025-12-05 19:22:42.790805289 +0000 UTC m=+1140.686027615" watchObservedRunningTime="2025-12-05 19:22:42.790892091 +0000 UTC m=+1140.686114397" Dec 05 19:22:44 crc kubenswrapper[4828]: I1205 19:22:44.138991 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw" Dec 05 19:22:44 crc kubenswrapper[4828]: I1205 19:22:44.206214 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sqmk4"] Dec 05 19:22:44 crc kubenswrapper[4828]: I1205 19:22:44.206432 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4" podUID="6fb6a146-a614-4339-9e07-8892e82fed36" containerName="dnsmasq-dns" containerID="cri-o://99db7c7f7f963389068b38d85c6fb45ce161a06deee4cf10ec9a2b779dc9e744" gracePeriod=10 Dec 05 19:22:44 crc kubenswrapper[4828]: I1205 19:22:44.849629 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 05 19:22:44 crc kubenswrapper[4828]: I1205 19:22:44.850389 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 05 19:22:44 crc kubenswrapper[4828]: I1205 19:22:44.877641 4828 generic.go:334] "Generic (PLEG): container finished" podID="6fb6a146-a614-4339-9e07-8892e82fed36" containerID="99db7c7f7f963389068b38d85c6fb45ce161a06deee4cf10ec9a2b779dc9e744" exitCode=0 Dec 05 19:22:44 crc kubenswrapper[4828]: I1205 19:22:44.877733 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4" event={"ID":"6fb6a146-a614-4339-9e07-8892e82fed36","Type":"ContainerDied","Data":"99db7c7f7f963389068b38d85c6fb45ce161a06deee4cf10ec9a2b779dc9e744"} Dec 05 19:22:44 crc kubenswrapper[4828]: I1205 19:22:44.972467 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.309280 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4" Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.403043 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl8nx\" (UniqueName: \"kubernetes.io/projected/6fb6a146-a614-4339-9e07-8892e82fed36-kube-api-access-gl8nx\") pod \"6fb6a146-a614-4339-9e07-8892e82fed36\" (UID: \"6fb6a146-a614-4339-9e07-8892e82fed36\") " Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.403145 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb6a146-a614-4339-9e07-8892e82fed36-config\") pod \"6fb6a146-a614-4339-9e07-8892e82fed36\" (UID: \"6fb6a146-a614-4339-9e07-8892e82fed36\") " Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.403223 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fb6a146-a614-4339-9e07-8892e82fed36-dns-svc\") pod \"6fb6a146-a614-4339-9e07-8892e82fed36\" (UID: \"6fb6a146-a614-4339-9e07-8892e82fed36\") " Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.409422 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb6a146-a614-4339-9e07-8892e82fed36-kube-api-access-gl8nx" (OuterVolumeSpecName: "kube-api-access-gl8nx") pod "6fb6a146-a614-4339-9e07-8892e82fed36" (UID: "6fb6a146-a614-4339-9e07-8892e82fed36"). InnerVolumeSpecName "kube-api-access-gl8nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.441033 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fb6a146-a614-4339-9e07-8892e82fed36-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fb6a146-a614-4339-9e07-8892e82fed36" (UID: "6fb6a146-a614-4339-9e07-8892e82fed36"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.442982 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fb6a146-a614-4339-9e07-8892e82fed36-config" (OuterVolumeSpecName: "config") pod "6fb6a146-a614-4339-9e07-8892e82fed36" (UID: "6fb6a146-a614-4339-9e07-8892e82fed36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.504949 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl8nx\" (UniqueName: \"kubernetes.io/projected/6fb6a146-a614-4339-9e07-8892e82fed36-kube-api-access-gl8nx\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.504991 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb6a146-a614-4339-9e07-8892e82fed36-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.505006 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fb6a146-a614-4339-9e07-8892e82fed36-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.889102 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4" Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.889096 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sqmk4" event={"ID":"6fb6a146-a614-4339-9e07-8892e82fed36","Type":"ContainerDied","Data":"470dccda9196093045054ac03a70186e29764ae3f79636bb171a178c870f31e4"} Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.889167 4828 scope.go:117] "RemoveContainer" containerID="99db7c7f7f963389068b38d85c6fb45ce161a06deee4cf10ec9a2b779dc9e744" Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.914991 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0" Dec 05 19:22:45 crc kubenswrapper[4828]: E1205 19:22:45.915482 4828 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 19:22:45 crc kubenswrapper[4828]: E1205 19:22:45.915518 4828 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 19:22:45 crc kubenswrapper[4828]: E1205 19:22:45.915580 4828 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift podName:821554f9-a51e-4a16-a053-b8bc18d93a9e nodeName:}" failed. No retries permitted until 2025-12-05 19:23:01.915558045 +0000 UTC m=+1159.810780361 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift") pod "swift-storage-0" (UID: "821554f9-a51e-4a16-a053-b8bc18d93a9e") : configmap "swift-ring-files" not found Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.922932 4828 scope.go:117] "RemoveContainer" containerID="82a8e6fb46e85670cb8a93876f00c773f950d96831dbeff282acff94c12f0239" Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.927865 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sqmk4"] Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.937447 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sqmk4"] Dec 05 19:22:45 crc kubenswrapper[4828]: I1205 19:22:45.973340 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.238929 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.239975 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.322108 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.462625 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb6a146-a614-4339-9e07-8892e82fed36" path="/var/lib/kubelet/pods/6fb6a146-a614-4339-9e07-8892e82fed36/volumes" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.463227 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qdhdn"] Dec 05 
19:22:46 crc kubenswrapper[4828]: E1205 19:22:46.463471 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb6a146-a614-4339-9e07-8892e82fed36" containerName="dnsmasq-dns" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.463492 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb6a146-a614-4339-9e07-8892e82fed36" containerName="dnsmasq-dns" Dec 05 19:22:46 crc kubenswrapper[4828]: E1205 19:22:46.463528 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb6a146-a614-4339-9e07-8892e82fed36" containerName="init" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.463536 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb6a146-a614-4339-9e07-8892e82fed36" containerName="init" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.463716 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb6a146-a614-4339-9e07-8892e82fed36" containerName="dnsmasq-dns" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.464333 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdhdn" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.481676 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5837-account-create-update-84sg4"] Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.483050 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5837-account-create-update-84sg4" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.485883 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.497898 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5837-account-create-update-84sg4"] Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.505767 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qdhdn"] Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.623251 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-f8hmz"] Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.624656 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-f8hmz" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.630355 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-f8hmz"] Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.649364 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5p5l\" (UniqueName: \"kubernetes.io/projected/27482011-da42-44e5-85ba-bd369aefc5b6-kube-api-access-z5p5l\") pod \"keystone-5837-account-create-update-84sg4\" (UID: \"27482011-da42-44e5-85ba-bd369aefc5b6\") " pod="openstack/keystone-5837-account-create-update-84sg4" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.649639 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27482011-da42-44e5-85ba-bd369aefc5b6-operator-scripts\") pod \"keystone-5837-account-create-update-84sg4\" (UID: \"27482011-da42-44e5-85ba-bd369aefc5b6\") " pod="openstack/keystone-5837-account-create-update-84sg4" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.649770 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t82z6\" (UniqueName: \"kubernetes.io/projected/07e307bc-dae3-47b6-8864-0835bcf5844d-kube-api-access-t82z6\") pod \"keystone-db-create-qdhdn\" (UID: \"07e307bc-dae3-47b6-8864-0835bcf5844d\") " pod="openstack/keystone-db-create-qdhdn" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.649970 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e307bc-dae3-47b6-8864-0835bcf5844d-operator-scripts\") pod \"keystone-db-create-qdhdn\" (UID: \"07e307bc-dae3-47b6-8864-0835bcf5844d\") " pod="openstack/keystone-db-create-qdhdn" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.720656 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-gqdhj"] Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.722112 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.728769 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.738952 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gqdhj"] Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.751675 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f004af24-6047-4eea-a073-dde452ac983f-operator-scripts\") pod \"placement-db-create-f8hmz\" (UID: \"f004af24-6047-4eea-a073-dde452ac983f\") " pod="openstack/placement-db-create-f8hmz" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.751753 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5p5l\" (UniqueName: \"kubernetes.io/projected/27482011-da42-44e5-85ba-bd369aefc5b6-kube-api-access-z5p5l\") pod \"keystone-5837-account-create-update-84sg4\" (UID: \"27482011-da42-44e5-85ba-bd369aefc5b6\") " pod="openstack/keystone-5837-account-create-update-84sg4" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.751791 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27482011-da42-44e5-85ba-bd369aefc5b6-operator-scripts\") pod \"keystone-5837-account-create-update-84sg4\" (UID: \"27482011-da42-44e5-85ba-bd369aefc5b6\") " pod="openstack/keystone-5837-account-create-update-84sg4" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.751815 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t82z6\" (UniqueName: \"kubernetes.io/projected/07e307bc-dae3-47b6-8864-0835bcf5844d-kube-api-access-t82z6\") pod \"keystone-db-create-qdhdn\" (UID: \"07e307bc-dae3-47b6-8864-0835bcf5844d\") " pod="openstack/keystone-db-create-qdhdn" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.751880 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vglvb\" (UniqueName: \"kubernetes.io/projected/f004af24-6047-4eea-a073-dde452ac983f-kube-api-access-vglvb\") pod \"placement-db-create-f8hmz\" (UID: \"f004af24-6047-4eea-a073-dde452ac983f\") " pod="openstack/placement-db-create-f8hmz" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.751899 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e307bc-dae3-47b6-8864-0835bcf5844d-operator-scripts\") pod \"keystone-db-create-qdhdn\" (UID: \"07e307bc-dae3-47b6-8864-0835bcf5844d\") " pod="openstack/keystone-db-create-qdhdn" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.752758 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e307bc-dae3-47b6-8864-0835bcf5844d-operator-scripts\") pod \"keystone-db-create-qdhdn\" (UID: \"07e307bc-dae3-47b6-8864-0835bcf5844d\") " pod="openstack/keystone-db-create-qdhdn" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.753622 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27482011-da42-44e5-85ba-bd369aefc5b6-operator-scripts\") pod \"keystone-5837-account-create-update-84sg4\" (UID: 
\"27482011-da42-44e5-85ba-bd369aefc5b6\") " pod="openstack/keystone-5837-account-create-update-84sg4" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.765236 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f21a-account-create-update-nq5bw"] Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.766530 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f21a-account-create-update-nq5bw" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.771905 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.781772 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f21a-account-create-update-nq5bw"] Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.784490 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t82z6\" (UniqueName: \"kubernetes.io/projected/07e307bc-dae3-47b6-8864-0835bcf5844d-kube-api-access-t82z6\") pod \"keystone-db-create-qdhdn\" (UID: \"07e307bc-dae3-47b6-8864-0835bcf5844d\") " pod="openstack/keystone-db-create-qdhdn" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.794093 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5p5l\" (UniqueName: \"kubernetes.io/projected/27482011-da42-44e5-85ba-bd369aefc5b6-kube-api-access-z5p5l\") pod \"keystone-5837-account-create-update-84sg4\" (UID: \"27482011-da42-44e5-85ba-bd369aefc5b6\") " pod="openstack/keystone-5837-account-create-update-84sg4" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.836319 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdhdn" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.843414 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5837-account-create-update-84sg4" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.852872 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vglvb\" (UniqueName: \"kubernetes.io/projected/f004af24-6047-4eea-a073-dde452ac983f-kube-api-access-vglvb\") pod \"placement-db-create-f8hmz\" (UID: \"f004af24-6047-4eea-a073-dde452ac983f\") " pod="openstack/placement-db-create-f8hmz" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.853125 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-config\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.853260 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f004af24-6047-4eea-a073-dde452ac983f-operator-scripts\") pod \"placement-db-create-f8hmz\" (UID: \"f004af24-6047-4eea-a073-dde452ac983f\") " pod="openstack/placement-db-create-f8hmz" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.853358 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-ovs-rundir\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.853492 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-ovn-rundir\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.853620 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnkfc\" (UniqueName: \"kubernetes.io/projected/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-kube-api-access-bnkfc\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.853719 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.853810 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-combined-ca-bundle\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.854459 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f004af24-6047-4eea-a073-dde452ac983f-operator-scripts\") pod \"placement-db-create-f8hmz\" (UID: \"f004af24-6047-4eea-a073-dde452ac983f\") " pod="openstack/placement-db-create-f8hmz" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.907933 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vglvb\" (UniqueName: \"kubernetes.io/projected/f004af24-6047-4eea-a073-dde452ac983f-kube-api-access-vglvb\") pod \"placement-db-create-f8hmz\" (UID: \"f004af24-6047-4eea-a073-dde452ac983f\") " pod="openstack/placement-db-create-f8hmz" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.914389 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-kz6bj"] Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.915896 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.919139 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.949858 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-kz6bj"] Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.951332 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-f8hmz" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.955090 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-ovs-rundir\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.955164 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhbdf\" (UniqueName: \"kubernetes.io/projected/5fb3621e-b696-4551-a40a-ed30e961d2dc-kube-api-access-zhbdf\") pod \"placement-f21a-account-create-update-nq5bw\" (UID: \"5fb3621e-b696-4551-a40a-ed30e961d2dc\") " pod="openstack/placement-f21a-account-create-update-nq5bw" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.955203 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-ovn-rundir\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.955221 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnkfc\" (UniqueName: \"kubernetes.io/projected/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-kube-api-access-bnkfc\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.955240 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.955256 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-combined-ca-bundle\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.955321 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb3621e-b696-4551-a40a-ed30e961d2dc-operator-scripts\") pod \"placement-f21a-account-create-update-nq5bw\" (UID: \"5fb3621e-b696-4551-a40a-ed30e961d2dc\") " pod="openstack/placement-f21a-account-create-update-nq5bw" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.955358 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-config\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.955402 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-ovs-rundir\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.955707 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-ovn-rundir\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.956024 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-config\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.960453 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.964382 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-combined-ca-bundle\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.978057 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnkfc\" (UniqueName: \"kubernetes.io/projected/4ba9cffc-5e2b-44e9-966a-833ab0de45eb-kube-api-access-bnkfc\") pod \"ovn-controller-metrics-gqdhj\" (UID: \"4ba9cffc-5e2b-44e9-966a-833ab0de45eb\") " pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.983865 4828 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-db-create-mbh7g"] Dec 05 19:22:46 crc kubenswrapper[4828]: I1205 19:22:46.984916 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mbh7g" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.014507 4828 generic.go:334] "Generic (PLEG): container finished" podID="e21a851c-5179-4365-8e72-5dea16be90cc" containerID="c399f137e8476671322f5c3df219c0542ea3413c956c623f62ca825d6f30ae5d" exitCode=0 Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.014584 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e21a851c-5179-4365-8e72-5dea16be90cc","Type":"ContainerDied","Data":"c399f137e8476671322f5c3df219c0542ea3413c956c623f62ca825d6f30ae5d"} Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.030119 4828 generic.go:334] "Generic (PLEG): container finished" podID="50db8d67-b1c6-4165-a526-8149092660ed" containerID="e065a0fe74f9c385bcc5b2d72a845ca0945938e9796cd9e236af91876f1347fa" exitCode=0 Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.030583 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50db8d67-b1c6-4165-a526-8149092660ed","Type":"ContainerDied","Data":"e065a0fe74f9c385bcc5b2d72a845ca0945938e9796cd9e236af91876f1347fa"} Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.037220 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gqdhj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.041915 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mbh7g"] Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.057174 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhbdf\" (UniqueName: \"kubernetes.io/projected/5fb3621e-b696-4551-a40a-ed30e961d2dc-kube-api-access-zhbdf\") pod \"placement-f21a-account-create-update-nq5bw\" (UID: \"5fb3621e-b696-4551-a40a-ed30e961d2dc\") " pod="openstack/placement-f21a-account-create-update-nq5bw" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.057221 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-config\") pod \"dnsmasq-dns-74f6f696b9-kz6bj\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.057329 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-kz6bj\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.057369 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-kz6bj\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.057410 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5fb3621e-b696-4551-a40a-ed30e961d2dc-operator-scripts\") pod \"placement-f21a-account-create-update-nq5bw\" (UID: \"5fb3621e-b696-4551-a40a-ed30e961d2dc\") " pod="openstack/placement-f21a-account-create-update-nq5bw" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.057432 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcz9b\" (UniqueName: \"kubernetes.io/projected/3c33a25d-52b3-4d43-8998-3668344008a3-kube-api-access-gcz9b\") pod \"dnsmasq-dns-74f6f696b9-kz6bj\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.058722 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb3621e-b696-4551-a40a-ed30e961d2dc-operator-scripts\") pod \"placement-f21a-account-create-update-nq5bw\" (UID: \"5fb3621e-b696-4551-a40a-ed30e961d2dc\") " pod="openstack/placement-f21a-account-create-update-nq5bw" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.067112 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3d05-account-create-update-9xpln"] Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.068395 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3d05-account-create-update-9xpln" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.074899 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.083001 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3d05-account-create-update-9xpln"] Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.083640 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhbdf\" (UniqueName: \"kubernetes.io/projected/5fb3621e-b696-4551-a40a-ed30e961d2dc-kube-api-access-zhbdf\") pod \"placement-f21a-account-create-update-nq5bw\" (UID: \"5fb3621e-b696-4551-a40a-ed30e961d2dc\") " pod="openstack/placement-f21a-account-create-update-nq5bw" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.159849 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81c5894e-e6ac-4192-a24a-b7c8375c47e8-operator-scripts\") pod \"glance-db-create-mbh7g\" (UID: \"81c5894e-e6ac-4192-a24a-b7c8375c47e8\") " pod="openstack/glance-db-create-mbh7g" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.160006 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-config\") pod \"dnsmasq-dns-74f6f696b9-kz6bj\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.160122 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq72w\" (UniqueName: \"kubernetes.io/projected/81c5894e-e6ac-4192-a24a-b7c8375c47e8-kube-api-access-dq72w\") pod \"glance-db-create-mbh7g\" (UID: \"81c5894e-e6ac-4192-a24a-b7c8375c47e8\") " pod="openstack/glance-db-create-mbh7g" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.160274 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-j2p8p\" (UniqueName: \"kubernetes.io/projected/34cf8c2c-f21e-4a27-a777-52b69bc7164b-kube-api-access-j2p8p\") pod \"glance-3d05-account-create-update-9xpln\" (UID: \"34cf8c2c-f21e-4a27-a777-52b69bc7164b\") " pod="openstack/glance-3d05-account-create-update-9xpln" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.160323 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34cf8c2c-f21e-4a27-a777-52b69bc7164b-operator-scripts\") pod \"glance-3d05-account-create-update-9xpln\" (UID: \"34cf8c2c-f21e-4a27-a777-52b69bc7164b\") " pod="openstack/glance-3d05-account-create-update-9xpln" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.160363 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-kz6bj\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.160486 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-kz6bj\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.160506 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcz9b\" (UniqueName: \"kubernetes.io/projected/3c33a25d-52b3-4d43-8998-3668344008a3-kube-api-access-gcz9b\") pod \"dnsmasq-dns-74f6f696b9-kz6bj\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.168739 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-config\") pod \"dnsmasq-dns-74f6f696b9-kz6bj\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.170984 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-kz6bj\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.171575 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-kz6bj\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.179488 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcz9b\" (UniqueName: \"kubernetes.io/projected/3c33a25d-52b3-4d43-8998-3668344008a3-kube-api-access-gcz9b\") pod \"dnsmasq-dns-74f6f696b9-kz6bj\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.221390 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-kz6bj"] 
Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.222679 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.259235 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-rd9jv"] Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.261218 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f21a-account-create-update-nq5bw" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.261510 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.262971 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81c5894e-e6ac-4192-a24a-b7c8375c47e8-operator-scripts\") pod \"glance-db-create-mbh7g\" (UID: \"81c5894e-e6ac-4192-a24a-b7c8375c47e8\") " pod="openstack/glance-db-create-mbh7g" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.263060 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq72w\" (UniqueName: \"kubernetes.io/projected/81c5894e-e6ac-4192-a24a-b7c8375c47e8-kube-api-access-dq72w\") pod \"glance-db-create-mbh7g\" (UID: \"81c5894e-e6ac-4192-a24a-b7c8375c47e8\") " pod="openstack/glance-db-create-mbh7g" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.263129 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2p8p\" (UniqueName: \"kubernetes.io/projected/34cf8c2c-f21e-4a27-a777-52b69bc7164b-kube-api-access-j2p8p\") pod \"glance-3d05-account-create-update-9xpln\" (UID: \"34cf8c2c-f21e-4a27-a777-52b69bc7164b\") " pod="openstack/glance-3d05-account-create-update-9xpln" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.263161 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34cf8c2c-f21e-4a27-a777-52b69bc7164b-operator-scripts\") pod \"glance-3d05-account-create-update-9xpln\" (UID: \"34cf8c2c-f21e-4a27-a777-52b69bc7164b\") " pod="openstack/glance-3d05-account-create-update-9xpln" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.264096 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34cf8c2c-f21e-4a27-a777-52b69bc7164b-operator-scripts\") pod \"glance-3d05-account-create-update-9xpln\" (UID: \"34cf8c2c-f21e-4a27-a777-52b69bc7164b\") " pod="openstack/glance-3d05-account-create-update-9xpln" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.264866 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81c5894e-e6ac-4192-a24a-b7c8375c47e8-operator-scripts\") pod \"glance-db-create-mbh7g\" (UID: \"81c5894e-e6ac-4192-a24a-b7c8375c47e8\") " pod="openstack/glance-db-create-mbh7g" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.270536 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.304203 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rd9jv"] Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.305817 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dq72w\" (UniqueName: \"kubernetes.io/projected/81c5894e-e6ac-4192-a24a-b7c8375c47e8-kube-api-access-dq72w\") pod \"glance-db-create-mbh7g\" (UID: \"81c5894e-e6ac-4192-a24a-b7c8375c47e8\") " pod="openstack/glance-db-create-mbh7g" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.306326 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2p8p\" (UniqueName: \"kubernetes.io/projected/34cf8c2c-f21e-4a27-a777-52b69bc7164b-kube-api-access-j2p8p\") pod \"glance-3d05-account-create-update-9xpln\" (UID: \"34cf8c2c-f21e-4a27-a777-52b69bc7164b\") " pod="openstack/glance-3d05-account-create-update-9xpln" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.321269 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mbh7g" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.336399 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.364857 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-config\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.364946 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-dns-svc\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.364997 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.365033 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.365065 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j79jf\" (UniqueName: \"kubernetes.io/projected/b0969276-79d6-4176-9211-af61074920b1-kube-api-access-j79jf\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.412357 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3d05-account-create-update-9xpln" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.467412 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j79jf\" (UniqueName: \"kubernetes.io/projected/b0969276-79d6-4176-9211-af61074920b1-kube-api-access-j79jf\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.467561 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-config\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.467625 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-dns-svc\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.467675 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.467735 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.468838 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.470240 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-config\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.470964 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-dns-svc\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.471329 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 
19:22:47.496203 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j79jf\" (UniqueName: \"kubernetes.io/projected/b0969276-79d6-4176-9211-af61074920b1-kube-api-access-j79jf\") pod \"dnsmasq-dns-698758b865-rd9jv\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.530853 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qdhdn"] Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.572903 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-f8hmz"] Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.620791 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5837-account-create-update-84sg4"] Dec 05 19:22:47 crc kubenswrapper[4828]: W1205 19:22:47.633651 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27482011_da42_44e5_85ba_bd369aefc5b6.slice/crio-a7dae7253702aef7e8e5373dd42cd6b5243dbbc3349c6b7e12c63299e39bfa41 WatchSource:0}: Error finding container a7dae7253702aef7e8e5373dd42cd6b5243dbbc3349c6b7e12c63299e39bfa41: Status 404 returned error can't find the container with id a7dae7253702aef7e8e5373dd42cd6b5243dbbc3349c6b7e12c63299e39bfa41 Dec 05 19:22:47 crc kubenswrapper[4828]: I1205 19:22:47.640622 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.032690 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gqdhj"] Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.102011 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f21a-account-create-update-nq5bw"] Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.112836 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-f8hmz" event={"ID":"f004af24-6047-4eea-a073-dde452ac983f","Type":"ContainerStarted","Data":"820a041d32a3ed8d67e6cdd20721207add039ba12349efe8e6d09a215eb6abeb"} Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.113798 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-kz6bj"] Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.141579 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e21a851c-5179-4365-8e72-5dea16be90cc","Type":"ContainerStarted","Data":"dc1dc77b3357801ba19f4888d4499f413ec8a380f40145c8bb2cec1a9cbb5018"} Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.141933 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mbh7g"] Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.142186 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.153806 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdhdn" event={"ID":"07e307bc-dae3-47b6-8864-0835bcf5844d","Type":"ContainerStarted","Data":"39ecab0f26139df852878cb873561f00192cf71ee372a8882ef598928f050af6"} Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.178731 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.00185993 
podStartE2EDuration="57.178715991s" podCreationTimestamp="2025-12-05 19:21:51 +0000 UTC" firstStartedPulling="2025-12-05 19:21:53.746584639 +0000 UTC m=+1091.641806945" lastFinishedPulling="2025-12-05 19:22:12.9234407 +0000 UTC m=+1110.818663006" observedRunningTime="2025-12-05 19:22:48.178409853 +0000 UTC m=+1146.073632149" watchObservedRunningTime="2025-12-05 19:22:48.178715991 +0000 UTC m=+1146.073938297" Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.187415 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50db8d67-b1c6-4165-a526-8149092660ed","Type":"ContainerStarted","Data":"faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae"} Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.188074 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.189926 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5837-account-create-update-84sg4" event={"ID":"27482011-da42-44e5-85ba-bd369aefc5b6","Type":"ContainerStarted","Data":"a7dae7253702aef7e8e5373dd42cd6b5243dbbc3349c6b7e12c63299e39bfa41"} Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.233435 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.390287614 podStartE2EDuration="57.23341499s" podCreationTimestamp="2025-12-05 19:21:51 +0000 UTC" firstStartedPulling="2025-12-05 19:21:54.051118405 +0000 UTC m=+1091.946340711" lastFinishedPulling="2025-12-05 19:22:12.894245781 +0000 UTC m=+1110.789468087" observedRunningTime="2025-12-05 19:22:48.224681961 +0000 UTC m=+1146.119904267" watchObservedRunningTime="2025-12-05 19:22:48.23341499 +0000 UTC m=+1146.128637296" Dec 05 19:22:48 crc kubenswrapper[4828]: W1205 19:22:48.316812 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34cf8c2c_f21e_4a27_a777_52b69bc7164b.slice/crio-c9b0ce15153af9eb26686b93864b04105ce7db92b9299f1cfa1fd96d2a03539a WatchSource:0}: Error finding container c9b0ce15153af9eb26686b93864b04105ce7db92b9299f1cfa1fd96d2a03539a: Status 404 returned error can't find the container with id c9b0ce15153af9eb26686b93864b04105ce7db92b9299f1cfa1fd96d2a03539a Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.329570 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3d05-account-create-update-9xpln"] Dec 05 19:22:48 crc kubenswrapper[4828]: I1205 19:22:48.560441 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rd9jv"] Dec 05 19:22:48 crc kubenswrapper[4828]: W1205 19:22:48.564966 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0969276_79d6_4176_9211_af61074920b1.slice/crio-c56f794452e21a624be952df009552d5e0142283f09830d851d7c78a2801abd0 WatchSource:0}: Error finding container c56f794452e21a624be952df009552d5e0142283f09830d851d7c78a2801abd0: Status 404 returned error can't find the container with id c56f794452e21a624be952df009552d5e0142283f09830d851d7c78a2801abd0 Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.197738 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gqdhj" 
event={"ID":"4ba9cffc-5e2b-44e9-966a-833ab0de45eb","Type":"ContainerStarted","Data":"25b32082fc47e8304f1074cd50e2a5b1c733030bb71b5dec86a31a717534ae29"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.199707 4828 generic.go:334] "Generic (PLEG): container finished" podID="07e307bc-dae3-47b6-8864-0835bcf5844d" containerID="2349e6ebd4947fb1bf8083fcae7eda8c2df8c769a5dcda7b1816e37ac2655f70" exitCode=0 Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.199768 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdhdn" event={"ID":"07e307bc-dae3-47b6-8864-0835bcf5844d","Type":"ContainerDied","Data":"2349e6ebd4947fb1bf8083fcae7eda8c2df8c769a5dcda7b1816e37ac2655f70"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.201454 4828 generic.go:334] "Generic (PLEG): container finished" podID="b0969276-79d6-4176-9211-af61074920b1" containerID="dca3c9e055b5bf22d9c0c927686af5ab716d562b94ac222c8737863bc94a33e1" exitCode=0 Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.201507 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rd9jv" event={"ID":"b0969276-79d6-4176-9211-af61074920b1","Type":"ContainerDied","Data":"dca3c9e055b5bf22d9c0c927686af5ab716d562b94ac222c8737863bc94a33e1"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.201527 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rd9jv" event={"ID":"b0969276-79d6-4176-9211-af61074920b1","Type":"ContainerStarted","Data":"c56f794452e21a624be952df009552d5e0142283f09830d851d7c78a2801abd0"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.203046 4828 generic.go:334] "Generic (PLEG): container finished" podID="f004af24-6047-4eea-a073-dde452ac983f" containerID="1fe2de0a4295b113cb59f2d8443d4d2b2e1b205064a22104485c7e07bd11ce6a" exitCode=0 Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.203170 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-f8hmz" event={"ID":"f004af24-6047-4eea-a073-dde452ac983f","Type":"ContainerDied","Data":"1fe2de0a4295b113cb59f2d8443d4d2b2e1b205064a22104485c7e07bd11ce6a"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.215144 4828 generic.go:334] "Generic (PLEG): container finished" podID="5fb3621e-b696-4551-a40a-ed30e961d2dc" containerID="6c7129beceae54ee9d411b9f5f637c07bd22440adfeaceddc507b3e36ed67d2f" exitCode=0 Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.215507 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f21a-account-create-update-nq5bw" event={"ID":"5fb3621e-b696-4551-a40a-ed30e961d2dc","Type":"ContainerDied","Data":"6c7129beceae54ee9d411b9f5f637c07bd22440adfeaceddc507b3e36ed67d2f"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.215533 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f21a-account-create-update-nq5bw" event={"ID":"5fb3621e-b696-4551-a40a-ed30e961d2dc","Type":"ContainerStarted","Data":"c1926b1cf4bb5c8be5f38039fe686a90a279bd52c34235e08fc3cff2248e9c51"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.241535 4828 generic.go:334] "Generic (PLEG): container finished" podID="34cf8c2c-f21e-4a27-a777-52b69bc7164b" containerID="3afdd10b9ae72ea7e1e19301113947ebcb68a96e5b0a8253230375db1ab95c0d" exitCode=0 Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.241667 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3d05-account-create-update-9xpln" 
event={"ID":"34cf8c2c-f21e-4a27-a777-52b69bc7164b","Type":"ContainerDied","Data":"3afdd10b9ae72ea7e1e19301113947ebcb68a96e5b0a8253230375db1ab95c0d"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.241690 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3d05-account-create-update-9xpln" event={"ID":"34cf8c2c-f21e-4a27-a777-52b69bc7164b","Type":"ContainerStarted","Data":"c9b0ce15153af9eb26686b93864b04105ce7db92b9299f1cfa1fd96d2a03539a"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.247896 4828 generic.go:334] "Generic (PLEG): container finished" podID="27482011-da42-44e5-85ba-bd369aefc5b6" containerID="f84b11559d84b6f44aabe84c0fce64fa62929d53bb1388d57cae059589ba787f" exitCode=0 Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.247973 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5837-account-create-update-84sg4" event={"ID":"27482011-da42-44e5-85ba-bd369aefc5b6","Type":"ContainerDied","Data":"f84b11559d84b6f44aabe84c0fce64fa62929d53bb1388d57cae059589ba787f"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.258085 4828 generic.go:334] "Generic (PLEG): container finished" podID="81c5894e-e6ac-4192-a24a-b7c8375c47e8" containerID="72dccee6dfe9227300223ae3fcf39a27d3cc60e44c0517d36cae50f91590db18" exitCode=0 Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.258171 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mbh7g" event={"ID":"81c5894e-e6ac-4192-a24a-b7c8375c47e8","Type":"ContainerDied","Data":"72dccee6dfe9227300223ae3fcf39a27d3cc60e44c0517d36cae50f91590db18"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.258194 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mbh7g" event={"ID":"81c5894e-e6ac-4192-a24a-b7c8375c47e8","Type":"ContainerStarted","Data":"b5dadc446a101dda9bf2a90295afb8ca7143d56ec224508fb485c6d02054eb87"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.282249 4828 generic.go:334] "Generic (PLEG): container finished" podID="3c33a25d-52b3-4d43-8998-3668344008a3" containerID="76a7a307aed024fc8bd70f162b1047e89b8ed4d32a8897a7e8faa19f0801453f" exitCode=0 Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.283019 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" event={"ID":"3c33a25d-52b3-4d43-8998-3668344008a3","Type":"ContainerDied","Data":"76a7a307aed024fc8bd70f162b1047e89b8ed4d32a8897a7e8faa19f0801453f"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.283049 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" event={"ID":"3c33a25d-52b3-4d43-8998-3668344008a3","Type":"ContainerStarted","Data":"f7ad0e10af4b7d7501f929165e06d8773234d8afa01c9915c213686b392f1e08"} Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.736193 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.746635 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-config\") pod \"3c33a25d-52b3-4d43-8998-3668344008a3\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.746687 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-ovsdbserver-nb\") pod \"3c33a25d-52b3-4d43-8998-3668344008a3\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.746765 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-dns-svc\") pod \"3c33a25d-52b3-4d43-8998-3668344008a3\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.746817 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcz9b\" (UniqueName: \"kubernetes.io/projected/3c33a25d-52b3-4d43-8998-3668344008a3-kube-api-access-gcz9b\") pod \"3c33a25d-52b3-4d43-8998-3668344008a3\" (UID: \"3c33a25d-52b3-4d43-8998-3668344008a3\") " Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.753050 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c33a25d-52b3-4d43-8998-3668344008a3-kube-api-access-gcz9b" (OuterVolumeSpecName: "kube-api-access-gcz9b") pod "3c33a25d-52b3-4d43-8998-3668344008a3" (UID: "3c33a25d-52b3-4d43-8998-3668344008a3"). InnerVolumeSpecName "kube-api-access-gcz9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.770465 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-config" (OuterVolumeSpecName: "config") pod "3c33a25d-52b3-4d43-8998-3668344008a3" (UID: "3c33a25d-52b3-4d43-8998-3668344008a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.772009 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c33a25d-52b3-4d43-8998-3668344008a3" (UID: "3c33a25d-52b3-4d43-8998-3668344008a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.775340 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c33a25d-52b3-4d43-8998-3668344008a3" (UID: "3c33a25d-52b3-4d43-8998-3668344008a3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.848845 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcz9b\" (UniqueName: \"kubernetes.io/projected/3c33a25d-52b3-4d43-8998-3668344008a3-kube-api-access-gcz9b\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.849092 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.849166 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:49 crc kubenswrapper[4828]: I1205 19:22:49.849232 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c33a25d-52b3-4d43-8998-3668344008a3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:50 crc kubenswrapper[4828]: I1205 19:22:50.291022 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rd9jv" event={"ID":"b0969276-79d6-4176-9211-af61074920b1","Type":"ContainerStarted","Data":"2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21"} Dec 05 19:22:50 crc kubenswrapper[4828]: I1205 19:22:50.291389 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:22:50 crc kubenswrapper[4828]: I1205 19:22:50.293082 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" event={"ID":"3c33a25d-52b3-4d43-8998-3668344008a3","Type":"ContainerDied","Data":"f7ad0e10af4b7d7501f929165e06d8773234d8afa01c9915c213686b392f1e08"} Dec 05 19:22:50 crc kubenswrapper[4828]: I1205 19:22:50.293137 4828 scope.go:117] "RemoveContainer" containerID="76a7a307aed024fc8bd70f162b1047e89b8ed4d32a8897a7e8faa19f0801453f" Dec 05 19:22:50 crc kubenswrapper[4828]: I1205 19:22:50.293134 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-kz6bj" Dec 05 19:22:50 crc kubenswrapper[4828]: I1205 19:22:50.304325 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gqdhj" event={"ID":"4ba9cffc-5e2b-44e9-966a-833ab0de45eb","Type":"ContainerStarted","Data":"dd450705720bed6ec92992d6e2f56b6523195efb8e4855f7fcf7486ddf7f8e1e"} Dec 05 19:22:50 crc kubenswrapper[4828]: I1205 19:22:50.323366 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-rd9jv" podStartSLOduration=3.323338649 podStartE2EDuration="3.323338649s" podCreationTimestamp="2025-12-05 19:22:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:22:50.317370846 +0000 UTC m=+1148.212593182" watchObservedRunningTime="2025-12-05 19:22:50.323338649 +0000 UTC m=+1148.218560955" Dec 05 19:22:50 crc kubenswrapper[4828]: I1205 19:22:50.361318 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-gqdhj" podStartSLOduration=3.833996919 podStartE2EDuration="4.361298929s" podCreationTimestamp="2025-12-05 19:22:46 +0000 UTC" firstStartedPulling="2025-12-05 19:22:48.12140682 +0000 UTC m=+1146.016629126" lastFinishedPulling="2025-12-05 19:22:48.64870883 +0000 UTC m=+1146.543931136" observedRunningTime="2025-12-05 19:22:50.355208513 +0000 UTC m=+1148.250430819" watchObservedRunningTime="2025-12-05 19:22:50.361298929 +0000 UTC m=+1148.256521245" Dec 05 19:22:50 crc kubenswrapper[4828]: I1205 19:22:50.428357 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-kz6bj"] Dec 05 19:22:50 crc kubenswrapper[4828]: I1205 19:22:50.437410 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-kz6bj"] Dec 05 19:22:50 crc kubenswrapper[4828]: I1205 19:22:50.473064 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c33a25d-52b3-4d43-8998-3668344008a3" path="/var/lib/kubelet/pods/3c33a25d-52b3-4d43-8998-3668344008a3/volumes" Dec 05 19:22:50 crc kubenswrapper[4828]: I1205 19:22:50.900376 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5837-account-create-update-84sg4" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.071078 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27482011-da42-44e5-85ba-bd369aefc5b6-operator-scripts\") pod \"27482011-da42-44e5-85ba-bd369aefc5b6\" (UID: \"27482011-da42-44e5-85ba-bd369aefc5b6\") " Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.071143 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5p5l\" (UniqueName: \"kubernetes.io/projected/27482011-da42-44e5-85ba-bd369aefc5b6-kube-api-access-z5p5l\") pod \"27482011-da42-44e5-85ba-bd369aefc5b6\" (UID: \"27482011-da42-44e5-85ba-bd369aefc5b6\") " Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.071553 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27482011-da42-44e5-85ba-bd369aefc5b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27482011-da42-44e5-85ba-bd369aefc5b6" (UID: "27482011-da42-44e5-85ba-bd369aefc5b6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.076564 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27482011-da42-44e5-85ba-bd369aefc5b6-kube-api-access-z5p5l" (OuterVolumeSpecName: "kube-api-access-z5p5l") pod "27482011-da42-44e5-85ba-bd369aefc5b6" (UID: "27482011-da42-44e5-85ba-bd369aefc5b6"). InnerVolumeSpecName "kube-api-access-z5p5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.154320 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdhdn" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.160657 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-f8hmz" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.169955 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3d05-account-create-update-9xpln" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.173651 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27482011-da42-44e5-85ba-bd369aefc5b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.173703 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5p5l\" (UniqueName: \"kubernetes.io/projected/27482011-da42-44e5-85ba-bd369aefc5b6-kube-api-access-z5p5l\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.178912 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f21a-account-create-update-nq5bw" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.197996 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mbh7g" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.274352 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2p8p\" (UniqueName: \"kubernetes.io/projected/34cf8c2c-f21e-4a27-a777-52b69bc7164b-kube-api-access-j2p8p\") pod \"34cf8c2c-f21e-4a27-a777-52b69bc7164b\" (UID: \"34cf8c2c-f21e-4a27-a777-52b69bc7164b\") " Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.274728 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vglvb\" (UniqueName: \"kubernetes.io/projected/f004af24-6047-4eea-a073-dde452ac983f-kube-api-access-vglvb\") pod \"f004af24-6047-4eea-a073-dde452ac983f\" (UID: \"f004af24-6047-4eea-a073-dde452ac983f\") " Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.274761 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t82z6\" (UniqueName: \"kubernetes.io/projected/07e307bc-dae3-47b6-8864-0835bcf5844d-kube-api-access-t82z6\") pod \"07e307bc-dae3-47b6-8864-0835bcf5844d\" (UID: \"07e307bc-dae3-47b6-8864-0835bcf5844d\") " Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.274877 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e307bc-dae3-47b6-8864-0835bcf5844d-operator-scripts\") pod \"07e307bc-dae3-47b6-8864-0835bcf5844d\" (UID: \"07e307bc-dae3-47b6-8864-0835bcf5844d\") " Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.274925 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34cf8c2c-f21e-4a27-a777-52b69bc7164b-operator-scripts\") pod \"34cf8c2c-f21e-4a27-a777-52b69bc7164b\" (UID: \"34cf8c2c-f21e-4a27-a777-52b69bc7164b\") " Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.274964 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f004af24-6047-4eea-a073-dde452ac983f-operator-scripts\") pod \"f004af24-6047-4eea-a073-dde452ac983f\" (UID: \"f004af24-6047-4eea-a073-dde452ac983f\") " Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.275529 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f004af24-6047-4eea-a073-dde452ac983f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f004af24-6047-4eea-a073-dde452ac983f" (UID: "f004af24-6047-4eea-a073-dde452ac983f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.275598 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e307bc-dae3-47b6-8864-0835bcf5844d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07e307bc-dae3-47b6-8864-0835bcf5844d" (UID: "07e307bc-dae3-47b6-8864-0835bcf5844d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.275690 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34cf8c2c-f21e-4a27-a777-52b69bc7164b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34cf8c2c-f21e-4a27-a777-52b69bc7164b" (UID: "34cf8c2c-f21e-4a27-a777-52b69bc7164b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.278107 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e307bc-dae3-47b6-8864-0835bcf5844d-kube-api-access-t82z6" (OuterVolumeSpecName: "kube-api-access-t82z6") pod "07e307bc-dae3-47b6-8864-0835bcf5844d" (UID: "07e307bc-dae3-47b6-8864-0835bcf5844d"). InnerVolumeSpecName "kube-api-access-t82z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.278153 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f004af24-6047-4eea-a073-dde452ac983f-kube-api-access-vglvb" (OuterVolumeSpecName: "kube-api-access-vglvb") pod "f004af24-6047-4eea-a073-dde452ac983f" (UID: "f004af24-6047-4eea-a073-dde452ac983f"). InnerVolumeSpecName "kube-api-access-vglvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.278641 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34cf8c2c-f21e-4a27-a777-52b69bc7164b-kube-api-access-j2p8p" (OuterVolumeSpecName: "kube-api-access-j2p8p") pod "34cf8c2c-f21e-4a27-a777-52b69bc7164b" (UID: "34cf8c2c-f21e-4a27-a777-52b69bc7164b"). InnerVolumeSpecName "kube-api-access-j2p8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.312545 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f21a-account-create-update-nq5bw" event={"ID":"5fb3621e-b696-4551-a40a-ed30e961d2dc","Type":"ContainerDied","Data":"c1926b1cf4bb5c8be5f38039fe686a90a279bd52c34235e08fc3cff2248e9c51"} Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.312592 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1926b1cf4bb5c8be5f38039fe686a90a279bd52c34235e08fc3cff2248e9c51" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.312562 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f21a-account-create-update-nq5bw" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.314206 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdhdn" event={"ID":"07e307bc-dae3-47b6-8864-0835bcf5844d","Type":"ContainerDied","Data":"39ecab0f26139df852878cb873561f00192cf71ee372a8882ef598928f050af6"} Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.314247 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39ecab0f26139df852878cb873561f00192cf71ee372a8882ef598928f050af6" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.314246 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qdhdn" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.315861 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3d05-account-create-update-9xpln" event={"ID":"34cf8c2c-f21e-4a27-a777-52b69bc7164b","Type":"ContainerDied","Data":"c9b0ce15153af9eb26686b93864b04105ce7db92b9299f1cfa1fd96d2a03539a"} Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.315968 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b0ce15153af9eb26686b93864b04105ce7db92b9299f1cfa1fd96d2a03539a" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.315874 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3d05-account-create-update-9xpln" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.317463 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5837-account-create-update-84sg4" event={"ID":"27482011-da42-44e5-85ba-bd369aefc5b6","Type":"ContainerDied","Data":"a7dae7253702aef7e8e5373dd42cd6b5243dbbc3349c6b7e12c63299e39bfa41"} Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.317504 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7dae7253702aef7e8e5373dd42cd6b5243dbbc3349c6b7e12c63299e39bfa41" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.317884 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5837-account-create-update-84sg4" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.318982 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-f8hmz" event={"ID":"f004af24-6047-4eea-a073-dde452ac983f","Type":"ContainerDied","Data":"820a041d32a3ed8d67e6cdd20721207add039ba12349efe8e6d09a215eb6abeb"} Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.319029 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="820a041d32a3ed8d67e6cdd20721207add039ba12349efe8e6d09a215eb6abeb" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.319148 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-f8hmz" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.320432 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mbh7g" event={"ID":"81c5894e-e6ac-4192-a24a-b7c8375c47e8","Type":"ContainerDied","Data":"b5dadc446a101dda9bf2a90295afb8ca7143d56ec224508fb485c6d02054eb87"} Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.320460 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5dadc446a101dda9bf2a90295afb8ca7143d56ec224508fb485c6d02054eb87" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.320437 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mbh7g" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.322861 4828 generic.go:334] "Generic (PLEG): container finished" podID="784df3ad-b111-476d-ad5c-e10ee3e04b2f" containerID="31b9f674a8a336bfcb204bf8e498f41081bf3e675223182aab1ec6825d004989" exitCode=0 Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.322919 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vs6cm" event={"ID":"784df3ad-b111-476d-ad5c-e10ee3e04b2f","Type":"ContainerDied","Data":"31b9f674a8a336bfcb204bf8e498f41081bf3e675223182aab1ec6825d004989"} Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.376177 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhbdf\" (UniqueName: \"kubernetes.io/projected/5fb3621e-b696-4551-a40a-ed30e961d2dc-kube-api-access-zhbdf\") pod \"5fb3621e-b696-4551-a40a-ed30e961d2dc\" (UID: \"5fb3621e-b696-4551-a40a-ed30e961d2dc\") " Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.376332 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb3621e-b696-4551-a40a-ed30e961d2dc-operator-scripts\") pod \"5fb3621e-b696-4551-a40a-ed30e961d2dc\" (UID: \"5fb3621e-b696-4551-a40a-ed30e961d2dc\") " Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.376437 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81c5894e-e6ac-4192-a24a-b7c8375c47e8-operator-scripts\") pod \"81c5894e-e6ac-4192-a24a-b7c8375c47e8\" (UID: \"81c5894e-e6ac-4192-a24a-b7c8375c47e8\") " Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.376470 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq72w\" (UniqueName: \"kubernetes.io/projected/81c5894e-e6ac-4192-a24a-b7c8375c47e8-kube-api-access-dq72w\") pod \"81c5894e-e6ac-4192-a24a-b7c8375c47e8\" (UID: \"81c5894e-e6ac-4192-a24a-b7c8375c47e8\") " Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.376924 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t82z6\" (UniqueName: \"kubernetes.io/projected/07e307bc-dae3-47b6-8864-0835bcf5844d-kube-api-access-t82z6\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.376930 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c5894e-e6ac-4192-a24a-b7c8375c47e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81c5894e-e6ac-4192-a24a-b7c8375c47e8" (UID: "81c5894e-e6ac-4192-a24a-b7c8375c47e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.376935 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fb3621e-b696-4551-a40a-ed30e961d2dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5fb3621e-b696-4551-a40a-ed30e961d2dc" (UID: "5fb3621e-b696-4551-a40a-ed30e961d2dc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.377322 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e307bc-dae3-47b6-8864-0835bcf5844d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.377347 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34cf8c2c-f21e-4a27-a777-52b69bc7164b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.377362 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f004af24-6047-4eea-a073-dde452ac983f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.377374 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2p8p\" (UniqueName: \"kubernetes.io/projected/34cf8c2c-f21e-4a27-a777-52b69bc7164b-kube-api-access-j2p8p\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.377386 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vglvb\" (UniqueName: \"kubernetes.io/projected/f004af24-6047-4eea-a073-dde452ac983f-kube-api-access-vglvb\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.379251 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb3621e-b696-4551-a40a-ed30e961d2dc-kube-api-access-zhbdf" (OuterVolumeSpecName: "kube-api-access-zhbdf") pod "5fb3621e-b696-4551-a40a-ed30e961d2dc" (UID: "5fb3621e-b696-4551-a40a-ed30e961d2dc"). InnerVolumeSpecName "kube-api-access-zhbdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.380260 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81c5894e-e6ac-4192-a24a-b7c8375c47e8-kube-api-access-dq72w" (OuterVolumeSpecName: "kube-api-access-dq72w") pod "81c5894e-e6ac-4192-a24a-b7c8375c47e8" (UID: "81c5894e-e6ac-4192-a24a-b7c8375c47e8"). InnerVolumeSpecName "kube-api-access-dq72w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.479339 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81c5894e-e6ac-4192-a24a-b7c8375c47e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.479377 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq72w\" (UniqueName: \"kubernetes.io/projected/81c5894e-e6ac-4192-a24a-b7c8375c47e8-kube-api-access-dq72w\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.479391 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhbdf\" (UniqueName: \"kubernetes.io/projected/5fb3621e-b696-4551-a40a-ed30e961d2dc-kube-api-access-zhbdf\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:51 crc kubenswrapper[4828]: I1205 19:22:51.479402 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb3621e-b696-4551-a40a-ed30e961d2dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.647676 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.799900 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lljfz\" (UniqueName: \"kubernetes.io/projected/784df3ad-b111-476d-ad5c-e10ee3e04b2f-kube-api-access-lljfz\") pod \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.799985 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/784df3ad-b111-476d-ad5c-e10ee3e04b2f-ring-data-devices\") pod \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.800049 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-combined-ca-bundle\") pod \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.800076 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-swiftconf\") pod \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.800103 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-dispersionconf\") pod \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.800185 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/784df3ad-b111-476d-ad5c-e10ee3e04b2f-scripts\") pod \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.800210 4828 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/784df3ad-b111-476d-ad5c-e10ee3e04b2f-etc-swift\") pod \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\" (UID: \"784df3ad-b111-476d-ad5c-e10ee3e04b2f\") " Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.801459 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/784df3ad-b111-476d-ad5c-e10ee3e04b2f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "784df3ad-b111-476d-ad5c-e10ee3e04b2f" (UID: "784df3ad-b111-476d-ad5c-e10ee3e04b2f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.802527 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784df3ad-b111-476d-ad5c-e10ee3e04b2f-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "784df3ad-b111-476d-ad5c-e10ee3e04b2f" (UID: "784df3ad-b111-476d-ad5c-e10ee3e04b2f"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.807780 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784df3ad-b111-476d-ad5c-e10ee3e04b2f-kube-api-access-lljfz" (OuterVolumeSpecName: "kube-api-access-lljfz") pod "784df3ad-b111-476d-ad5c-e10ee3e04b2f" (UID: "784df3ad-b111-476d-ad5c-e10ee3e04b2f"). InnerVolumeSpecName "kube-api-access-lljfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.808135 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "784df3ad-b111-476d-ad5c-e10ee3e04b2f" (UID: "784df3ad-b111-476d-ad5c-e10ee3e04b2f"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.833524 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "784df3ad-b111-476d-ad5c-e10ee3e04b2f" (UID: "784df3ad-b111-476d-ad5c-e10ee3e04b2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.833777 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784df3ad-b111-476d-ad5c-e10ee3e04b2f-scripts" (OuterVolumeSpecName: "scripts") pod "784df3ad-b111-476d-ad5c-e10ee3e04b2f" (UID: "784df3ad-b111-476d-ad5c-e10ee3e04b2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.836517 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "784df3ad-b111-476d-ad5c-e10ee3e04b2f" (UID: "784df3ad-b111-476d-ad5c-e10ee3e04b2f"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.901991 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lljfz\" (UniqueName: \"kubernetes.io/projected/784df3ad-b111-476d-ad5c-e10ee3e04b2f-kube-api-access-lljfz\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.902027 4828 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/784df3ad-b111-476d-ad5c-e10ee3e04b2f-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.902039 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.902052 4828 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.902063 4828 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/784df3ad-b111-476d-ad5c-e10ee3e04b2f-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.902076 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/784df3ad-b111-476d-ad5c-e10ee3e04b2f-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:52 crc kubenswrapper[4828]: I1205 19:22:52.902084 4828 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/784df3ad-b111-476d-ad5c-e10ee3e04b2f-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:53 crc kubenswrapper[4828]: I1205 19:22:53.106944 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-s6jdb" podUID="f88a4161-1271-4374-9740-eaea879d6561" containerName="ovn-controller" probeResult="failure" output=< Dec 05 19:22:53 crc kubenswrapper[4828]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 05 19:22:53 crc kubenswrapper[4828]: > Dec 05 19:22:53 crc kubenswrapper[4828]: I1205 19:22:53.339605 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"31b675bd-ec74-4876-91a0-95e4180e8cab","Type":"ContainerStarted","Data":"091c651df9f3c2a76c746c9325adea6399f3b9aec8792ea70bf5be03795d1fb3"} Dec 05 19:22:53 crc kubenswrapper[4828]: I1205 19:22:53.341197 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vs6cm" event={"ID":"784df3ad-b111-476d-ad5c-e10ee3e04b2f","Type":"ContainerDied","Data":"b0faf1022dada597de0feb6e3f5edadbae6e2dce147a7d937798cb151d57fa9d"} Dec 05 19:22:53 crc kubenswrapper[4828]: I1205 19:22:53.341220 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0faf1022dada597de0feb6e3f5edadbae6e2dce147a7d937798cb151d57fa9d" Dec 05 19:22:53 crc kubenswrapper[4828]: I1205 19:22:53.341287 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vs6cm" Dec 05 19:22:53 crc kubenswrapper[4828]: I1205 19:22:53.363571 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=40.700863121 podStartE2EDuration="48.363550038s" podCreationTimestamp="2025-12-05 19:22:05 +0000 UTC" firstStartedPulling="2025-12-05 19:22:13.713752657 +0000 UTC m=+1111.608974963" lastFinishedPulling="2025-12-05 19:22:21.376439584 +0000 UTC m=+1119.271661880" observedRunningTime="2025-12-05 19:22:53.359456986 +0000 UTC m=+1151.254679292" watchObservedRunningTime="2025-12-05 19:22:53.363550038 +0000 UTC m=+1151.258772344" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.373923 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7ac00d92-7825-4462-ab12-8d2059085d24","Type":"ContainerStarted","Data":"cc474a19c67257e91901eaf464e6e8a810577a18af7a91a11ab87e63b6e42df6"} Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.401009 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=46.701914785 podStartE2EDuration="54.400987579s" podCreationTimestamp="2025-12-05 19:22:01 +0000 UTC" firstStartedPulling="2025-12-05 19:22:13.702950171 +0000 UTC m=+1111.598172477" lastFinishedPulling="2025-12-05 19:22:21.402022935 +0000 UTC m=+1119.297245271" observedRunningTime="2025-12-05 19:22:55.391641873 +0000 UTC m=+1153.286864189" watchObservedRunningTime="2025-12-05 19:22:55.400987579 +0000 UTC m=+1153.296209875" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.526766 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 05 19:22:55 crc kubenswrapper[4828]: E1205 19:22:55.527186 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb3621e-b696-4551-a40a-ed30e961d2dc" containerName="mariadb-account-create-update" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527203 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb3621e-b696-4551-a40a-ed30e961d2dc" containerName="mariadb-account-create-update" Dec 05 19:22:55 crc kubenswrapper[4828]: E1205 19:22:55.527214 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c33a25d-52b3-4d43-8998-3668344008a3" containerName="init" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527220 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c33a25d-52b3-4d43-8998-3668344008a3" containerName="init" Dec 05 19:22:55 crc kubenswrapper[4828]: E1205 19:22:55.527234 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="784df3ad-b111-476d-ad5c-e10ee3e04b2f" containerName="swift-ring-rebalance" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527241 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="784df3ad-b111-476d-ad5c-e10ee3e04b2f" containerName="swift-ring-rebalance" Dec 05 19:22:55 crc kubenswrapper[4828]: E1205 19:22:55.527252 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f004af24-6047-4eea-a073-dde452ac983f" containerName="mariadb-database-create" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527258 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f004af24-6047-4eea-a073-dde452ac983f" containerName="mariadb-database-create" Dec 05 19:22:55 crc kubenswrapper[4828]: E1205 19:22:55.527270 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27482011-da42-44e5-85ba-bd369aefc5b6" 
containerName="mariadb-account-create-update" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527276 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="27482011-da42-44e5-85ba-bd369aefc5b6" containerName="mariadb-account-create-update" Dec 05 19:22:55 crc kubenswrapper[4828]: E1205 19:22:55.527282 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34cf8c2c-f21e-4a27-a777-52b69bc7164b" containerName="mariadb-account-create-update" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527288 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="34cf8c2c-f21e-4a27-a777-52b69bc7164b" containerName="mariadb-account-create-update" Dec 05 19:22:55 crc kubenswrapper[4828]: E1205 19:22:55.527300 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c5894e-e6ac-4192-a24a-b7c8375c47e8" containerName="mariadb-database-create" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527308 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c5894e-e6ac-4192-a24a-b7c8375c47e8" containerName="mariadb-database-create" Dec 05 19:22:55 crc kubenswrapper[4828]: E1205 19:22:55.527317 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e307bc-dae3-47b6-8864-0835bcf5844d" containerName="mariadb-database-create" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527323 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e307bc-dae3-47b6-8864-0835bcf5844d" containerName="mariadb-database-create" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527476 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="34cf8c2c-f21e-4a27-a777-52b69bc7164b" containerName="mariadb-account-create-update" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527490 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c33a25d-52b3-4d43-8998-3668344008a3" containerName="init" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527499 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f004af24-6047-4eea-a073-dde452ac983f" containerName="mariadb-database-create" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527506 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e307bc-dae3-47b6-8864-0835bcf5844d" containerName="mariadb-database-create" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527514 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="27482011-da42-44e5-85ba-bd369aefc5b6" containerName="mariadb-account-create-update" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527532 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="784df3ad-b111-476d-ad5c-e10ee3e04b2f" containerName="swift-ring-rebalance" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527543 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="81c5894e-e6ac-4192-a24a-b7c8375c47e8" containerName="mariadb-database-create" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.527550 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb3621e-b696-4551-a40a-ed30e961d2dc" containerName="mariadb-account-create-update" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.528389 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.531265 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.531360 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.531661 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-b88nn" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.533815 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.535217 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.646265 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0e095c-680f-45ec-96b2-3713515bc9c3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.646350 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc0e095c-680f-45ec-96b2-3713515bc9c3-scripts\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.646388 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw2hj\" (UniqueName: \"kubernetes.io/projected/bc0e095c-680f-45ec-96b2-3713515bc9c3-kube-api-access-cw2hj\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.646653 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0e095c-680f-45ec-96b2-3713515bc9c3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.646935 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc0e095c-680f-45ec-96b2-3713515bc9c3-config\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.646995 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc0e095c-680f-45ec-96b2-3713515bc9c3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.647194 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0e095c-680f-45ec-96b2-3713515bc9c3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: 
I1205 19:22:55.748425 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0e095c-680f-45ec-96b2-3713515bc9c3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.748498 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc0e095c-680f-45ec-96b2-3713515bc9c3-scripts\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.748518 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw2hj\" (UniqueName: \"kubernetes.io/projected/bc0e095c-680f-45ec-96b2-3713515bc9c3-kube-api-access-cw2hj\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.748561 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0e095c-680f-45ec-96b2-3713515bc9c3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.748607 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc0e095c-680f-45ec-96b2-3713515bc9c3-config\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.748626 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc0e095c-680f-45ec-96b2-3713515bc9c3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.748660 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0e095c-680f-45ec-96b2-3713515bc9c3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.749313 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc0e095c-680f-45ec-96b2-3713515bc9c3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.749536 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc0e095c-680f-45ec-96b2-3713515bc9c3-config\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.750122 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc0e095c-680f-45ec-96b2-3713515bc9c3-scripts\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0" Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.753745 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0e095c-680f-45ec-96b2-3713515bc9c3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0"
Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.753917 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0e095c-680f-45ec-96b2-3713515bc9c3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0"
Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.755082 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc0e095c-680f-45ec-96b2-3713515bc9c3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0"
Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.780635 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw2hj\" (UniqueName: \"kubernetes.io/projected/bc0e095c-680f-45ec-96b2-3713515bc9c3-kube-api-access-cw2hj\") pod \"ovn-northd-0\" (UID: \"bc0e095c-680f-45ec-96b2-3713515bc9c3\") " pod="openstack/ovn-northd-0"
Dec 05 19:22:55 crc kubenswrapper[4828]: I1205 19:22:55.846172 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 05 19:22:56 crc kubenswrapper[4828]: I1205 19:22:56.304640 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 05 19:22:56 crc kubenswrapper[4828]: I1205 19:22:56.381458 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bc0e095c-680f-45ec-96b2-3713515bc9c3","Type":"ContainerStarted","Data":"698f1fa4bcd7890d6519b3349707174b37fbc17a23ff6f10bd0412d43bcc0289"}
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.226669 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-m8vpv"]
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.228084 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.230218 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.230608 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bp8rt"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.240422 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-m8vpv"]
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.378388 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-combined-ca-bundle\") pod \"glance-db-sync-m8vpv\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.378470 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-config-data\") pod \"glance-db-sync-m8vpv\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.378520 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrtnq\" (UniqueName: \"kubernetes.io/projected/8f5d1d7e-96cd-493a-84e4-1a605338a206-kube-api-access-xrtnq\") pod \"glance-db-sync-m8vpv\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.378558 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-db-sync-config-data\") pod \"glance-db-sync-m8vpv\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.481881 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-db-sync-config-data\") pod \"glance-db-sync-m8vpv\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.482049 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-combined-ca-bundle\") pod \"glance-db-sync-m8vpv\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.482113 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-config-data\") pod \"glance-db-sync-m8vpv\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.483481 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrtnq\" (UniqueName: \"kubernetes.io/projected/8f5d1d7e-96cd-493a-84e4-1a605338a206-kube-api-access-xrtnq\") pod \"glance-db-sync-m8vpv\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.488914 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-config-data\") pod \"glance-db-sync-m8vpv\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.495606 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-combined-ca-bundle\") pod \"glance-db-sync-m8vpv\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.499022 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-db-sync-config-data\") pod \"glance-db-sync-m8vpv\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.501721 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrtnq\" (UniqueName: \"kubernetes.io/projected/8f5d1d7e-96cd-493a-84e4-1a605338a206-kube-api-access-xrtnq\") pod \"glance-db-sync-m8vpv\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.547180 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-m8vpv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.643019 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-rd9jv"
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.693390 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-zcskw"]
Dec 05 19:22:57 crc kubenswrapper[4828]: I1205 19:22:57.693648 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw" podUID="24ef607c-0e94-4e51-9fbe-54745774cd5e" containerName="dnsmasq-dns" containerID="cri-o://866813dcad96d1097a1856cc762d6d4066aaf70e7d992657a70c2d3799454760" gracePeriod=10
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.107075 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-s6jdb" podUID="f88a4161-1271-4374-9740-eaea879d6561" containerName="ovn-controller" probeResult="failure" output=<
Dec 05 19:22:58 crc kubenswrapper[4828]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Dec 05 19:22:58 crc kubenswrapper[4828]: >
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.121732 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-l467t"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.122923 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-l467t"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.355063 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s6jdb-config-q6t66"]
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.356438 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.365538 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.423058 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s6jdb-config-q6t66"]
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.440333 4828 generic.go:334] "Generic (PLEG): container finished" podID="24ef607c-0e94-4e51-9fbe-54745774cd5e" containerID="866813dcad96d1097a1856cc762d6d4066aaf70e7d992657a70c2d3799454760" exitCode=0
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.440704 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw" event={"ID":"24ef607c-0e94-4e51-9fbe-54745774cd5e","Type":"ContainerDied","Data":"866813dcad96d1097a1856cc762d6d4066aaf70e7d992657a70c2d3799454760"}
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.526663 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-log-ovn\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.526735 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-run-ovn\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.526770 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpf5j\" (UniqueName: \"kubernetes.io/projected/3fa1190b-959c-459f-9be5-715139f3f32c-kube-api-access-kpf5j\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.526808 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa1190b-959c-459f-9be5-715139f3f32c-additional-scripts\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.526912 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa1190b-959c-459f-9be5-715139f3f32c-scripts\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.526965 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-run\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.627753 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-run\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.627799 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-log-ovn\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.627874 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-run-ovn\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.627911 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpf5j\" (UniqueName: \"kubernetes.io/projected/3fa1190b-959c-459f-9be5-715139f3f32c-kube-api-access-kpf5j\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.627930 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa1190b-959c-459f-9be5-715139f3f32c-additional-scripts\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.627986 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa1190b-959c-459f-9be5-715139f3f32c-scripts\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.628370 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-run-ovn\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.628405 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-run\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.628562 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-log-ovn\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.629219 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa1190b-959c-459f-9be5-715139f3f32c-additional-scripts\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.632338 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa1190b-959c-459f-9be5-715139f3f32c-scripts\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.645546 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpf5j\" (UniqueName: \"kubernetes.io/projected/3fa1190b-959c-459f-9be5-715139f3f32c-kube-api-access-kpf5j\") pod \"ovn-controller-s6jdb-config-q6t66\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.710696 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s6jdb-config-q6t66"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.726777 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw"
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.840469 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ef607c-0e94-4e51-9fbe-54745774cd5e-config\") pod \"24ef607c-0e94-4e51-9fbe-54745774cd5e\" (UID: \"24ef607c-0e94-4e51-9fbe-54745774cd5e\") "
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.840516 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7fcr\" (UniqueName: \"kubernetes.io/projected/24ef607c-0e94-4e51-9fbe-54745774cd5e-kube-api-access-t7fcr\") pod \"24ef607c-0e94-4e51-9fbe-54745774cd5e\" (UID: \"24ef607c-0e94-4e51-9fbe-54745774cd5e\") "
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.840608 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ef607c-0e94-4e51-9fbe-54745774cd5e-dns-svc\") pod \"24ef607c-0e94-4e51-9fbe-54745774cd5e\" (UID: \"24ef607c-0e94-4e51-9fbe-54745774cd5e\") "
Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.846583 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ef607c-0e94-4e51-9fbe-54745774cd5e-kube-api-access-t7fcr" (OuterVolumeSpecName: "kube-api-access-t7fcr") pod "24ef607c-0e94-4e51-9fbe-54745774cd5e" (UID: "24ef607c-0e94-4e51-9fbe-54745774cd5e"). InnerVolumeSpecName "kube-api-access-t7fcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.847513 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-m8vpv"] Dec 05 19:22:58 crc kubenswrapper[4828]: W1205 19:22:58.852940 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f5d1d7e_96cd_493a_84e4_1a605338a206.slice/crio-3ed3a7e9a1c8532b4ed8e303814bf722634823ae5587bdc22e609b29fcad4c5f WatchSource:0}: Error finding container 3ed3a7e9a1c8532b4ed8e303814bf722634823ae5587bdc22e609b29fcad4c5f: Status 404 returned error can't find the container with id 3ed3a7e9a1c8532b4ed8e303814bf722634823ae5587bdc22e609b29fcad4c5f Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.887584 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ef607c-0e94-4e51-9fbe-54745774cd5e-config" (OuterVolumeSpecName: "config") pod "24ef607c-0e94-4e51-9fbe-54745774cd5e" (UID: "24ef607c-0e94-4e51-9fbe-54745774cd5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.905557 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ef607c-0e94-4e51-9fbe-54745774cd5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24ef607c-0e94-4e51-9fbe-54745774cd5e" (UID: "24ef607c-0e94-4e51-9fbe-54745774cd5e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.942611 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ef607c-0e94-4e51-9fbe-54745774cd5e-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.942660 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7fcr\" (UniqueName: \"kubernetes.io/projected/24ef607c-0e94-4e51-9fbe-54745774cd5e-kube-api-access-t7fcr\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:58 crc kubenswrapper[4828]: I1205 19:22:58.942676 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ef607c-0e94-4e51-9fbe-54745774cd5e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.213575 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s6jdb-config-q6t66"] Dec 05 19:22:59 crc kubenswrapper[4828]: W1205 19:22:59.224640 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fa1190b_959c_459f_9be5_715139f3f32c.slice/crio-24fc686e8bb84233a80a07387215848478a2aa7d9aaa722915d385033e2c58e8 WatchSource:0}: Error finding container 24fc686e8bb84233a80a07387215848478a2aa7d9aaa722915d385033e2c58e8: Status 404 returned error can't find the container with id 24fc686e8bb84233a80a07387215848478a2aa7d9aaa722915d385033e2c58e8 Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.451657 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bc0e095c-680f-45ec-96b2-3713515bc9c3","Type":"ContainerStarted","Data":"e90cadd84d5c40c5fd83766047d573a46ce3ef4b54f9f949a31b408074be992c"} Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.451699 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"bc0e095c-680f-45ec-96b2-3713515bc9c3","Type":"ContainerStarted","Data":"03d7da88fb3063c8413ef0411039a1970bfcb657e45c39dee6143e6d21fd622a"} Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.452607 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.455063 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-m8vpv" event={"ID":"8f5d1d7e-96cd-493a-84e4-1a605338a206","Type":"ContainerStarted","Data":"3ed3a7e9a1c8532b4ed8e303814bf722634823ae5587bdc22e609b29fcad4c5f"} Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.461897 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw" Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.461901 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-zcskw" event={"ID":"24ef607c-0e94-4e51-9fbe-54745774cd5e","Type":"ContainerDied","Data":"3000e4d4d5bc6183d70696a07f1e84b36a4c1fa166ea6cfcd6c57a607c464985"} Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.461955 4828 scope.go:117] "RemoveContainer" containerID="866813dcad96d1097a1856cc762d6d4066aaf70e7d992657a70c2d3799454760" Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.464397 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s6jdb-config-q6t66" event={"ID":"3fa1190b-959c-459f-9be5-715139f3f32c","Type":"ContainerStarted","Data":"24fc686e8bb84233a80a07387215848478a2aa7d9aaa722915d385033e2c58e8"} Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.486542 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.5112136830000003 podStartE2EDuration="4.486520392s" podCreationTimestamp="2025-12-05 19:22:55 +0000 UTC" firstStartedPulling="2025-12-05 19:22:56.301896526 +0000 UTC m=+1154.197118832" lastFinishedPulling="2025-12-05 19:22:58.277203235 +0000 UTC m=+1156.172425541" observedRunningTime="2025-12-05 19:22:59.47730791 +0000 UTC m=+1157.372530236" watchObservedRunningTime="2025-12-05 19:22:59.486520392 +0000 UTC m=+1157.381742698" Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.490315 4828 scope.go:117] "RemoveContainer" containerID="dde742afe00bf4c7a4fdd4a14d6a10741756b07cca8f2f3240f2e3fe0d6f1edd" Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.515089 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-zcskw"] Dec 05 19:22:59 crc kubenswrapper[4828]: I1205 19:22:59.526206 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-zcskw"] Dec 05 19:23:00 crc kubenswrapper[4828]: I1205 19:23:00.456056 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ef607c-0e94-4e51-9fbe-54745774cd5e" path="/var/lib/kubelet/pods/24ef607c-0e94-4e51-9fbe-54745774cd5e/volumes" Dec 05 19:23:00 crc kubenswrapper[4828]: I1205 19:23:00.500723 4828 generic.go:334] "Generic (PLEG): container finished" podID="3fa1190b-959c-459f-9be5-715139f3f32c" containerID="8027cae6e99adaa1b3c37200de9135597bae28c60cc1d757ebc7b66cf2e51506" exitCode=0 Dec 05 19:23:00 crc kubenswrapper[4828]: I1205 19:23:00.500796 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s6jdb-config-q6t66" 
event={"ID":"3fa1190b-959c-459f-9be5-715139f3f32c","Type":"ContainerDied","Data":"8027cae6e99adaa1b3c37200de9135597bae28c60cc1d757ebc7b66cf2e51506"} Dec 05 19:23:01 crc kubenswrapper[4828]: I1205 19:23:01.935332 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s6jdb-config-q6t66" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.017364 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.027561 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/821554f9-a51e-4a16-a053-b8bc18d93a9e-etc-swift\") pod \"swift-storage-0\" (UID: \"821554f9-a51e-4a16-a053-b8bc18d93a9e\") " pod="openstack/swift-storage-0" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.074198 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.118784 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-log-ovn\") pod \"3fa1190b-959c-459f-9be5-715139f3f32c\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.118861 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-run-ovn\") pod \"3fa1190b-959c-459f-9be5-715139f3f32c\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.118891 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa1190b-959c-459f-9be5-715139f3f32c-additional-scripts\") pod \"3fa1190b-959c-459f-9be5-715139f3f32c\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.118929 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpf5j\" (UniqueName: \"kubernetes.io/projected/3fa1190b-959c-459f-9be5-715139f3f32c-kube-api-access-kpf5j\") pod \"3fa1190b-959c-459f-9be5-715139f3f32c\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.118967 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-run\") pod \"3fa1190b-959c-459f-9be5-715139f3f32c\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.118990 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa1190b-959c-459f-9be5-715139f3f32c-scripts\") pod \"3fa1190b-959c-459f-9be5-715139f3f32c\" (UID: \"3fa1190b-959c-459f-9be5-715139f3f32c\") " Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.120034 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa1190b-959c-459f-9be5-715139f3f32c-additional-scripts" 
(OuterVolumeSpecName: "additional-scripts") pod "3fa1190b-959c-459f-9be5-715139f3f32c" (UID: "3fa1190b-959c-459f-9be5-715139f3f32c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.120072 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3fa1190b-959c-459f-9be5-715139f3f32c" (UID: "3fa1190b-959c-459f-9be5-715139f3f32c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.120092 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3fa1190b-959c-459f-9be5-715139f3f32c" (UID: "3fa1190b-959c-459f-9be5-715139f3f32c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.120208 4828 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.120230 4828 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.120243 4828 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa1190b-959c-459f-9be5-715139f3f32c-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.120402 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa1190b-959c-459f-9be5-715139f3f32c-scripts" (OuterVolumeSpecName: "scripts") pod "3fa1190b-959c-459f-9be5-715139f3f32c" (UID: "3fa1190b-959c-459f-9be5-715139f3f32c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.120455 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-run" (OuterVolumeSpecName: "var-run") pod "3fa1190b-959c-459f-9be5-715139f3f32c" (UID: "3fa1190b-959c-459f-9be5-715139f3f32c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.123525 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa1190b-959c-459f-9be5-715139f3f32c-kube-api-access-kpf5j" (OuterVolumeSpecName: "kube-api-access-kpf5j") pod "3fa1190b-959c-459f-9be5-715139f3f32c" (UID: "3fa1190b-959c-459f-9be5-715139f3f32c"). InnerVolumeSpecName "kube-api-access-kpf5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.222065 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpf5j\" (UniqueName: \"kubernetes.io/projected/3fa1190b-959c-459f-9be5-715139f3f32c-kube-api-access-kpf5j\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.222449 4828 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fa1190b-959c-459f-9be5-715139f3f32c-var-run\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.222466 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa1190b-959c-459f-9be5-715139f3f32c-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.518200 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s6jdb-config-q6t66" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.518207 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s6jdb-config-q6t66" event={"ID":"3fa1190b-959c-459f-9be5-715139f3f32c","Type":"ContainerDied","Data":"24fc686e8bb84233a80a07387215848478a2aa7d9aaa722915d385033e2c58e8"} Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.518533 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24fc686e8bb84233a80a07387215848478a2aa7d9aaa722915d385033e2c58e8" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.520796 4828 generic.go:334] "Generic (PLEG): container finished" podID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" containerID="5ae29702b9c693bc225b109d7199f5610a3f177228a2fa0ff8ce44ca6c251dda" exitCode=1 Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.520864 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerDied","Data":"5ae29702b9c693bc225b109d7199f5610a3f177228a2fa0ff8ce44ca6c251dda"} Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.521338 4828 scope.go:117] "RemoveContainer" containerID="5ae29702b9c693bc225b109d7199f5610a3f177228a2fa0ff8ce44ca6c251dda" Dec 05 19:23:02 crc kubenswrapper[4828]: I1205 19:23:02.680707 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 05 19:23:02 crc kubenswrapper[4828]: W1205 19:23:02.689590 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod821554f9_a51e_4a16_a053_b8bc18d93a9e.slice/crio-3e58da31c10747ba136ab46f3077ae49c5266230f9de041ff57f01be289e8c98 WatchSource:0}: Error finding container 3e58da31c10747ba136ab46f3077ae49c5266230f9de041ff57f01be289e8c98: Status 404 returned error can't find the container with id 3e58da31c10747ba136ab46f3077ae49c5266230f9de041ff57f01be289e8c98 Dec 05 19:23:03 crc kubenswrapper[4828]: I1205 19:23:03.049208 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-s6jdb-config-q6t66"] Dec 05 19:23:03 crc kubenswrapper[4828]: I1205 19:23:03.056086 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 05 19:23:03 crc kubenswrapper[4828]: I1205 19:23:03.056389 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ovn-controller-s6jdb-config-q6t66"] Dec 05 19:23:03 crc kubenswrapper[4828]: I1205 19:23:03.139266 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-s6jdb" Dec 05 19:23:03 crc kubenswrapper[4828]: I1205 19:23:03.349033 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:23:03 crc kubenswrapper[4828]: I1205 19:23:03.537796 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"430af8e018b4db94e5fbc1658ab5c48af8bdcbbed4d9e9f4a8b1c4d49b774c99"} Dec 05 19:23:03 crc kubenswrapper[4828]: I1205 19:23:03.538356 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:23:03 crc kubenswrapper[4828]: I1205 19:23:03.540661 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"3e58da31c10747ba136ab46f3077ae49c5266230f9de041ff57f01be289e8c98"} Dec 05 19:23:04 crc kubenswrapper[4828]: I1205 19:23:04.457605 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fa1190b-959c-459f-9be5-715139f3f32c" path="/var/lib/kubelet/pods/3fa1190b-959c-459f-9be5-715139f3f32c/volumes" Dec 05 19:23:04 crc kubenswrapper[4828]: I1205 19:23:04.550202 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"ac6659e95b621e3a2990970c4a5e489da5c96bd1f318bd3009566154e9655b07"} Dec 05 19:23:05 crc kubenswrapper[4828]: I1205 19:23:05.259602 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:23:05 crc kubenswrapper[4828]: I1205 19:23:05.260067 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:23:05 crc kubenswrapper[4828]: I1205 19:23:05.260127 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:23:05 crc kubenswrapper[4828]: I1205 19:23:05.260995 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0cc286b8dceed84d395e55058b3c3160e80eae904633740211fa06dda4862d4f"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 19:23:05 crc kubenswrapper[4828]: I1205 19:23:05.261088 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" 
containerID="cri-o://0cc286b8dceed84d395e55058b3c3160e80eae904633740211fa06dda4862d4f" gracePeriod=600 Dec 05 19:23:05 crc kubenswrapper[4828]: E1205 19:23:05.266666 4828 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.98:52696->38.102.83.98:46169: write tcp 38.102.83.98:52696->38.102.83.98:46169: write: broken pipe Dec 05 19:23:05 crc kubenswrapper[4828]: I1205 19:23:05.560680 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"2fc778c09df6ddfc7ec9154e11899ef7dadf412645c85a0ca3f4ab64c4c5eb71"} Dec 05 19:23:05 crc kubenswrapper[4828]: I1205 19:23:05.563846 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="0cc286b8dceed84d395e55058b3c3160e80eae904633740211fa06dda4862d4f" exitCode=0 Dec 05 19:23:05 crc kubenswrapper[4828]: I1205 19:23:05.563873 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"0cc286b8dceed84d395e55058b3c3160e80eae904633740211fa06dda4862d4f"} Dec 05 19:23:05 crc kubenswrapper[4828]: I1205 19:23:05.563895 4828 scope.go:117] "RemoveContainer" containerID="ba6c96d79cafa37f2c2c4a1d891acafd85624229c151c0bd90de50b84f8cad3b" Dec 05 19:23:06 crc kubenswrapper[4828]: E1205 19:23:06.900231 4828 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.98:52712->38.102.83.98:46169: write tcp 38.102.83.98:52712->38.102.83.98:46169: write: connection reset by peer Dec 05 19:23:10 crc kubenswrapper[4828]: I1205 19:23:10.861227 4828 scope.go:117] "RemoveContainer" containerID="a398787b0a26fe19af2503c25bc374336ec360896be427712cce55633539dcdc" Dec 05 19:23:10 crc kubenswrapper[4828]: I1205 19:23:10.932771 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 05 19:23:12 crc kubenswrapper[4828]: I1205 19:23:12.101641 4828 scope.go:117] "RemoveContainer" containerID="33b12ebe7182334519e40fc9a40a82e7b4c163b223e638151b4d1d6427b1a0c1" Dec 05 19:23:12 crc kubenswrapper[4828]: I1205 19:23:12.173644 4828 scope.go:117] "RemoveContainer" containerID="a4a477b8074b9b53bdc28f997dc8e278695759415c28adebdcee8f3bea983ee5" Dec 05 19:23:12 crc kubenswrapper[4828]: I1205 19:23:12.719036 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"66979d93e0482d8c2c2b69473cb2bc5c8671198a214f61815c8a62007074f562"} Dec 05 19:23:12 crc kubenswrapper[4828]: I1205 19:23:12.719361 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"b70f8ca8c19409c23356e1001f663f97f7ea97289a640dddd9b92a0a4457823b"} Dec 05 19:23:12 crc kubenswrapper[4828]: I1205 19:23:12.721208 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"aab20e62cb85e96facfecb4602cb199c408644c9ab8b87bd02db08dd9a3628e0"} Dec 05 19:23:13 crc kubenswrapper[4828]: I1205 19:23:13.730270 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-m8vpv" 
event={"ID":"8f5d1d7e-96cd-493a-84e4-1a605338a206","Type":"ContainerStarted","Data":"c46a0ac6be5918360885a9f6a2919dca06b0185ad56f87ab0c0141d530443453"} Dec 05 19:23:13 crc kubenswrapper[4828]: I1205 19:23:13.777760 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-m8vpv" podStartSLOduration=3.446072476 podStartE2EDuration="16.777730637s" podCreationTimestamp="2025-12-05 19:22:57 +0000 UTC" firstStartedPulling="2025-12-05 19:22:58.854933755 +0000 UTC m=+1156.750156061" lastFinishedPulling="2025-12-05 19:23:12.186591916 +0000 UTC m=+1170.081814222" observedRunningTime="2025-12-05 19:23:13.76067793 +0000 UTC m=+1171.655900256" watchObservedRunningTime="2025-12-05 19:23:13.777730637 +0000 UTC m=+1171.672952993" Dec 05 19:23:14 crc kubenswrapper[4828]: I1205 19:23:14.740988 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"6413550c2b86926618d937516b4cb597d547e1835474174a20f17d5a016c1b51"} Dec 05 19:23:14 crc kubenswrapper[4828]: I1205 19:23:14.741278 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"1086d9db5eef40e88db626e3b49004d6da428ad023adae28d30563840aa58388"} Dec 05 19:23:14 crc kubenswrapper[4828]: I1205 19:23:14.741300 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"169b040e006329871cc49c15daec35e0164d6e4230e645ba09e07f1f99787e9d"} Dec 05 19:23:14 crc kubenswrapper[4828]: I1205 19:23:14.741312 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"bf3bfe82e7cfe8ceef8ca3e1cf77369a9da71172a4dbbd51191f3706e224af22"} Dec 05 19:23:15 crc kubenswrapper[4828]: I1205 19:23:15.126351 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:23:16 crc kubenswrapper[4828]: I1205 19:23:16.766385 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"f06bb2bb9a09970aeff0f46a2ff24e6aa6ad6380c9946451ee37f0f2a6ba11e5"} Dec 05 19:23:16 crc kubenswrapper[4828]: I1205 19:23:16.766980 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"028dafbc639f708479f4f58fef241eea26740ef38b588d130d35411ff7d9c804"} Dec 05 19:23:16 crc kubenswrapper[4828]: I1205 19:23:16.766993 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"99fc8f3c33d8148bffe8600dde7f5bbfd46ef561740671e3c624a7722e9e3553"} Dec 05 19:23:17 crc kubenswrapper[4828]: I1205 19:23:17.777187 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"729a66decc986d321839c4cc24ed7f6f877d900b45e4b03b144fb896ca04908f"} Dec 05 19:23:17 crc kubenswrapper[4828]: I1205 19:23:17.777717 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"955ef9cd5c13334c8c25e0a0bab71663dd717d6b6b167e1a264c3c2d34e086fb"} Dec 05 19:23:17 crc kubenswrapper[4828]: I1205 19:23:17.777736 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"4fd1d324113fb4a3c594ce40155c3fa3d128a257b90244796e94309b9855c6dd"} Dec 05 19:23:18 crc kubenswrapper[4828]: I1205 19:23:18.803718 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"821554f9-a51e-4a16-a053-b8bc18d93a9e","Type":"ContainerStarted","Data":"2f730800007eb4532902b866bd210223aaf1cb674eaaad4275cf538a60ade22a"} Dec 05 19:23:18 crc kubenswrapper[4828]: I1205 19:23:18.864359 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.422581356 podStartE2EDuration="50.864339632s" podCreationTimestamp="2025-12-05 19:22:28 +0000 UTC" firstStartedPulling="2025-12-05 19:23:02.692133885 +0000 UTC m=+1160.587356191" lastFinishedPulling="2025-12-05 19:23:16.133892161 +0000 UTC m=+1174.029114467" observedRunningTime="2025-12-05 19:23:18.857332719 +0000 UTC m=+1176.752555035" watchObservedRunningTime="2025-12-05 19:23:18.864339632 +0000 UTC m=+1176.759561948" Dec 05 19:23:19 crc kubenswrapper[4828]: I1205 19:23:19.815637 4828 generic.go:334] "Generic (PLEG): container finished" podID="8f5d1d7e-96cd-493a-84e4-1a605338a206" containerID="c46a0ac6be5918360885a9f6a2919dca06b0185ad56f87ab0c0141d530443453" exitCode=0 Dec 05 19:23:19 crc kubenswrapper[4828]: I1205 19:23:19.815749 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-m8vpv" event={"ID":"8f5d1d7e-96cd-493a-84e4-1a605338a206","Type":"ContainerDied","Data":"c46a0ac6be5918360885a9f6a2919dca06b0185ad56f87ab0c0141d530443453"} Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.252787 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-m8vpv" Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.419213 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-combined-ca-bundle\") pod \"8f5d1d7e-96cd-493a-84e4-1a605338a206\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.419696 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-db-sync-config-data\") pod \"8f5d1d7e-96cd-493a-84e4-1a605338a206\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.419766 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrtnq\" (UniqueName: \"kubernetes.io/projected/8f5d1d7e-96cd-493a-84e4-1a605338a206-kube-api-access-xrtnq\") pod \"8f5d1d7e-96cd-493a-84e4-1a605338a206\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.419812 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-config-data\") pod \"8f5d1d7e-96cd-493a-84e4-1a605338a206\" (UID: \"8f5d1d7e-96cd-493a-84e4-1a605338a206\") " Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.425028 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5d1d7e-96cd-493a-84e4-1a605338a206-kube-api-access-xrtnq" (OuterVolumeSpecName: "kube-api-access-xrtnq") pod "8f5d1d7e-96cd-493a-84e4-1a605338a206" (UID: "8f5d1d7e-96cd-493a-84e4-1a605338a206"). InnerVolumeSpecName "kube-api-access-xrtnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.425850 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8f5d1d7e-96cd-493a-84e4-1a605338a206" (UID: "8f5d1d7e-96cd-493a-84e4-1a605338a206"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.465077 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f5d1d7e-96cd-493a-84e4-1a605338a206" (UID: "8f5d1d7e-96cd-493a-84e4-1a605338a206"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.487944 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-config-data" (OuterVolumeSpecName: "config-data") pod "8f5d1d7e-96cd-493a-84e4-1a605338a206" (UID: "8f5d1d7e-96cd-493a-84e4-1a605338a206"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.522418 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.522453 4828 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.522465 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrtnq\" (UniqueName: \"kubernetes.io/projected/8f5d1d7e-96cd-493a-84e4-1a605338a206-kube-api-access-xrtnq\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.522501 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5d1d7e-96cd-493a-84e4-1a605338a206-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.832457 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-m8vpv" event={"ID":"8f5d1d7e-96cd-493a-84e4-1a605338a206","Type":"ContainerDied","Data":"3ed3a7e9a1c8532b4ed8e303814bf722634823ae5587bdc22e609b29fcad4c5f"} Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.832504 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed3a7e9a1c8532b4ed8e303814bf722634823ae5587bdc22e609b29fcad4c5f" Dec 05 19:23:21 crc kubenswrapper[4828]: I1205 19:23:21.832558 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-m8vpv" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.071935 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-hls9v"] Dec 05 19:23:43 crc kubenswrapper[4828]: E1205 19:23:43.072919 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ef607c-0e94-4e51-9fbe-54745774cd5e" containerName="init" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.072938 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ef607c-0e94-4e51-9fbe-54745774cd5e" containerName="init" Dec 05 19:23:43 crc kubenswrapper[4828]: E1205 19:23:43.072951 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa1190b-959c-459f-9be5-715139f3f32c" containerName="ovn-config" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.072958 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa1190b-959c-459f-9be5-715139f3f32c" containerName="ovn-config" Dec 05 19:23:43 crc kubenswrapper[4828]: E1205 19:23:43.072975 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5d1d7e-96cd-493a-84e4-1a605338a206" containerName="glance-db-sync" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.072982 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5d1d7e-96cd-493a-84e4-1a605338a206" containerName="glance-db-sync" Dec 05 19:23:43 crc kubenswrapper[4828]: E1205 19:23:43.073001 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ef607c-0e94-4e51-9fbe-54745774cd5e" containerName="dnsmasq-dns" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.073008 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ef607c-0e94-4e51-9fbe-54745774cd5e" containerName="dnsmasq-dns" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.073217 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5d1d7e-96cd-493a-84e4-1a605338a206" containerName="glance-db-sync" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.073245 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ef607c-0e94-4e51-9fbe-54745774cd5e" containerName="dnsmasq-dns" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.073276 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fa1190b-959c-459f-9be5-715139f3f32c" containerName="ovn-config" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.074355 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.090977 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-hls9v"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.146258 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-hls9v"] Dec 05 19:23:43 crc kubenswrapper[4828]: E1205 19:23:43.146859 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-bvfk2 ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" podUID="065a4655-bbd3-4a8e-9daf-56c03db0e6c0" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.172519 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rf9j4"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.173890 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.176636 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.191972 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rf9j4"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.201121 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-6lkzn"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.202315 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6lkzn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.212918 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6lkzn"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.221509 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-config\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.221589 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvfk2\" (UniqueName: \"kubernetes.io/projected/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-kube-api-access-bvfk2\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.221616 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.221639 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.221654 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.221679 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5613-account-create-update-9xhmx"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.222708 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5613-account-create-update-9xhmx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.226307 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.243268 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5613-account-create-update-9xhmx"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323194 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-config\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323245 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323278 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323315 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxclm\" (UniqueName: \"kubernetes.io/projected/9628efae-96b7-43fb-a5cc-05279f664d77-kube-api-access-rxclm\") pod \"barbican-5613-account-create-update-9xhmx\" (UID: \"9628efae-96b7-43fb-a5cc-05279f664d77\") " pod="openstack/barbican-5613-account-create-update-9xhmx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323337 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvfk2\" (UniqueName: \"kubernetes.io/projected/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-kube-api-access-bvfk2\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323356 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323374 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323393 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvfr8\" (UniqueName: \"kubernetes.io/projected/176021e9-3e79-43bb-9c96-54d69defaba1-kube-api-access-zvfr8\") pod 
\"barbican-db-create-6lkzn\" (UID: \"176021e9-3e79-43bb-9c96-54d69defaba1\") " pod="openstack/barbican-db-create-6lkzn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323413 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323429 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323456 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9628efae-96b7-43fb-a5cc-05279f664d77-operator-scripts\") pod \"barbican-5613-account-create-update-9xhmx\" (UID: \"9628efae-96b7-43fb-a5cc-05279f664d77\") " pod="openstack/barbican-5613-account-create-update-9xhmx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323477 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176021e9-3e79-43bb-9c96-54d69defaba1-operator-scripts\") pod \"barbican-db-create-6lkzn\" (UID: \"176021e9-3e79-43bb-9c96-54d69defaba1\") " pod="openstack/barbican-db-create-6lkzn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323492 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-config\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323514 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.323533 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2hth\" (UniqueName: \"kubernetes.io/projected/398a387b-e59c-486c-a39c-a0e0f45c75a2-kube-api-access-s2hth\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.324369 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.324544 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-config\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.325003 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.325347 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.339763 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvfk2\" (UniqueName: \"kubernetes.io/projected/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-kube-api-access-bvfk2\") pod \"dnsmasq-dns-5b946c75cc-hls9v\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.425262 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.425345 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.425437 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxclm\" (UniqueName: \"kubernetes.io/projected/9628efae-96b7-43fb-a5cc-05279f664d77-kube-api-access-rxclm\") pod \"barbican-5613-account-create-update-9xhmx\" (UID: \"9628efae-96b7-43fb-a5cc-05279f664d77\") " pod="openstack/barbican-5613-account-create-update-9xhmx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.425465 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.425903 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvfr8\" (UniqueName: \"kubernetes.io/projected/176021e9-3e79-43bb-9c96-54d69defaba1-kube-api-access-zvfr8\") pod \"barbican-db-create-6lkzn\" (UID: \"176021e9-3e79-43bb-9c96-54d69defaba1\") " pod="openstack/barbican-db-create-6lkzn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.425961 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9628efae-96b7-43fb-a5cc-05279f664d77-operator-scripts\") pod \"barbican-5613-account-create-update-9xhmx\" (UID: \"9628efae-96b7-43fb-a5cc-05279f664d77\") " pod="openstack/barbican-5613-account-create-update-9xhmx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.425990 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176021e9-3e79-43bb-9c96-54d69defaba1-operator-scripts\") pod \"barbican-db-create-6lkzn\" (UID: \"176021e9-3e79-43bb-9c96-54d69defaba1\") " pod="openstack/barbican-db-create-6lkzn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.426013 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-config\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.426081 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.426105 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2hth\" (UniqueName: \"kubernetes.io/projected/398a387b-e59c-486c-a39c-a0e0f45c75a2-kube-api-access-s2hth\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.426390 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.426499 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.427053 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.427084 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176021e9-3e79-43bb-9c96-54d69defaba1-operator-scripts\") pod \"barbican-db-create-6lkzn\" (UID: \"176021e9-3e79-43bb-9c96-54d69defaba1\") " pod="openstack/barbican-db-create-6lkzn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.427222 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.427242 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9628efae-96b7-43fb-a5cc-05279f664d77-operator-scripts\") pod \"barbican-5613-account-create-update-9xhmx\" (UID: \"9628efae-96b7-43fb-a5cc-05279f664d77\") " pod="openstack/barbican-5613-account-create-update-9xhmx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.427357 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-config\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.443659 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2hth\" (UniqueName: \"kubernetes.io/projected/398a387b-e59c-486c-a39c-a0e0f45c75a2-kube-api-access-s2hth\") pod \"dnsmasq-dns-74f6bcbc87-rf9j4\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.445450 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxclm\" (UniqueName: \"kubernetes.io/projected/9628efae-96b7-43fb-a5cc-05279f664d77-kube-api-access-rxclm\") pod \"barbican-5613-account-create-update-9xhmx\" (UID: \"9628efae-96b7-43fb-a5cc-05279f664d77\") " pod="openstack/barbican-5613-account-create-update-9xhmx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.447726 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvfr8\" (UniqueName: \"kubernetes.io/projected/176021e9-3e79-43bb-9c96-54d69defaba1-kube-api-access-zvfr8\") pod \"barbican-db-create-6lkzn\" (UID: \"176021e9-3e79-43bb-9c96-54d69defaba1\") " pod="openstack/barbican-db-create-6lkzn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.486008 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-vfkq9"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.487256 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vfkq9" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.500243 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.502172 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b57a-account-create-update-xqckx"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.503711 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b57a-account-create-update-xqckx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.505757 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.509434 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vfkq9"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.517977 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b57a-account-create-update-xqckx"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.522508 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6lkzn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.540728 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5613-account-create-update-9xhmx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.593457 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fcdj6"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.594947 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fcdj6" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.608570 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fcdj6"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.629756 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmw66\" (UniqueName: \"kubernetes.io/projected/5bba95cd-b21d-4f44-b575-59527cf3b537-kube-api-access-tmw66\") pod \"cinder-b57a-account-create-update-xqckx\" (UID: \"5bba95cd-b21d-4f44-b575-59527cf3b537\") " pod="openstack/cinder-b57a-account-create-update-xqckx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.630053 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fd15a0-f362-4f74-bde6-0df71598dcc9-operator-scripts\") pod \"cinder-db-create-vfkq9\" (UID: \"b3fd15a0-f362-4f74-bde6-0df71598dcc9\") " pod="openstack/cinder-db-create-vfkq9" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.630241 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bba95cd-b21d-4f44-b575-59527cf3b537-operator-scripts\") pod \"cinder-b57a-account-create-update-xqckx\" (UID: \"5bba95cd-b21d-4f44-b575-59527cf3b537\") " pod="openstack/cinder-b57a-account-create-update-xqckx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.630357 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtwv7\" (UniqueName: \"kubernetes.io/projected/b3fd15a0-f362-4f74-bde6-0df71598dcc9-kube-api-access-xtwv7\") pod \"cinder-db-create-vfkq9\" (UID: \"b3fd15a0-f362-4f74-bde6-0df71598dcc9\") " pod="openstack/cinder-db-create-vfkq9" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.715690 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-aa5a-account-create-update-44sfn"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.717313 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-aa5a-account-create-update-44sfn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.721028 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.739175 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c341eebe-b27c-4dee-bb8e-477cd913128b-operator-scripts\") pod \"neutron-db-create-fcdj6\" (UID: \"c341eebe-b27c-4dee-bb8e-477cd913128b\") " pod="openstack/neutron-db-create-fcdj6" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.739261 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmw66\" (UniqueName: \"kubernetes.io/projected/5bba95cd-b21d-4f44-b575-59527cf3b537-kube-api-access-tmw66\") pod \"cinder-b57a-account-create-update-xqckx\" (UID: \"5bba95cd-b21d-4f44-b575-59527cf3b537\") " pod="openstack/cinder-b57a-account-create-update-xqckx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.739294 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fd15a0-f362-4f74-bde6-0df71598dcc9-operator-scripts\") pod \"cinder-db-create-vfkq9\" (UID: \"b3fd15a0-f362-4f74-bde6-0df71598dcc9\") " pod="openstack/cinder-db-create-vfkq9" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.739412 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bba95cd-b21d-4f44-b575-59527cf3b537-operator-scripts\") pod \"cinder-b57a-account-create-update-xqckx\" (UID: \"5bba95cd-b21d-4f44-b575-59527cf3b537\") " pod="openstack/cinder-b57a-account-create-update-xqckx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.739496 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsd5s\" (UniqueName: \"kubernetes.io/projected/c341eebe-b27c-4dee-bb8e-477cd913128b-kube-api-access-gsd5s\") pod \"neutron-db-create-fcdj6\" (UID: \"c341eebe-b27c-4dee-bb8e-477cd913128b\") " pod="openstack/neutron-db-create-fcdj6" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.739568 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtwv7\" (UniqueName: \"kubernetes.io/projected/b3fd15a0-f362-4f74-bde6-0df71598dcc9-kube-api-access-xtwv7\") pod \"cinder-db-create-vfkq9\" (UID: \"b3fd15a0-f362-4f74-bde6-0df71598dcc9\") " pod="openstack/cinder-db-create-vfkq9" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.740567 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bba95cd-b21d-4f44-b575-59527cf3b537-operator-scripts\") pod \"cinder-b57a-account-create-update-xqckx\" (UID: \"5bba95cd-b21d-4f44-b575-59527cf3b537\") " pod="openstack/cinder-b57a-account-create-update-xqckx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.740589 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fd15a0-f362-4f74-bde6-0df71598dcc9-operator-scripts\") pod \"cinder-db-create-vfkq9\" (UID: \"b3fd15a0-f362-4f74-bde6-0df71598dcc9\") " pod="openstack/cinder-db-create-vfkq9" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.748707 4828 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/neutron-aa5a-account-create-update-44sfn"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.759624 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-dks6t"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.761488 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.763602 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.764348 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtwv7\" (UniqueName: \"kubernetes.io/projected/b3fd15a0-f362-4f74-bde6-0df71598dcc9-kube-api-access-xtwv7\") pod \"cinder-db-create-vfkq9\" (UID: \"b3fd15a0-f362-4f74-bde6-0df71598dcc9\") " pod="openstack/cinder-db-create-vfkq9" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.764552 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.764728 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.765044 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7rdh4" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.777853 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmw66\" (UniqueName: \"kubernetes.io/projected/5bba95cd-b21d-4f44-b575-59527cf3b537-kube-api-access-tmw66\") pod \"cinder-b57a-account-create-update-xqckx\" (UID: \"5bba95cd-b21d-4f44-b575-59527cf3b537\") " pod="openstack/cinder-b57a-account-create-update-xqckx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.780019 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dks6t"] Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.841176 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26009bb4-7abf-4522-94c2-e63a94f8c7cb-operator-scripts\") pod \"neutron-aa5a-account-create-update-44sfn\" (UID: \"26009bb4-7abf-4522-94c2-e63a94f8c7cb\") " pod="openstack/neutron-aa5a-account-create-update-44sfn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.841278 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g576w\" (UniqueName: \"kubernetes.io/projected/26009bb4-7abf-4522-94c2-e63a94f8c7cb-kube-api-access-g576w\") pod \"neutron-aa5a-account-create-update-44sfn\" (UID: \"26009bb4-7abf-4522-94c2-e63a94f8c7cb\") " pod="openstack/neutron-aa5a-account-create-update-44sfn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.841349 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsd5s\" (UniqueName: \"kubernetes.io/projected/c341eebe-b27c-4dee-bb8e-477cd913128b-kube-api-access-gsd5s\") pod \"neutron-db-create-fcdj6\" (UID: \"c341eebe-b27c-4dee-bb8e-477cd913128b\") " pod="openstack/neutron-db-create-fcdj6" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.841553 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c341eebe-b27c-4dee-bb8e-477cd913128b-operator-scripts\") pod \"neutron-db-create-fcdj6\" (UID: \"c341eebe-b27c-4dee-bb8e-477cd913128b\") " pod="openstack/neutron-db-create-fcdj6" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.842676 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c341eebe-b27c-4dee-bb8e-477cd913128b-operator-scripts\") pod \"neutron-db-create-fcdj6\" (UID: \"c341eebe-b27c-4dee-bb8e-477cd913128b\") " pod="openstack/neutron-db-create-fcdj6" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.855958 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsd5s\" (UniqueName: \"kubernetes.io/projected/c341eebe-b27c-4dee-bb8e-477cd913128b-kube-api-access-gsd5s\") pod \"neutron-db-create-fcdj6\" (UID: \"c341eebe-b27c-4dee-bb8e-477cd913128b\") " pod="openstack/neutron-db-create-fcdj6" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.920786 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vfkq9" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.943174 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26009bb4-7abf-4522-94c2-e63a94f8c7cb-operator-scripts\") pod \"neutron-aa5a-account-create-update-44sfn\" (UID: \"26009bb4-7abf-4522-94c2-e63a94f8c7cb\") " pod="openstack/neutron-aa5a-account-create-update-44sfn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.943221 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879d8ff7-8aac-4c43-999f-064029a8b7cf-combined-ca-bundle\") pod \"keystone-db-sync-dks6t\" (UID: \"879d8ff7-8aac-4c43-999f-064029a8b7cf\") " pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.943273 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p55cg\" (UniqueName: \"kubernetes.io/projected/879d8ff7-8aac-4c43-999f-064029a8b7cf-kube-api-access-p55cg\") pod \"keystone-db-sync-dks6t\" (UID: \"879d8ff7-8aac-4c43-999f-064029a8b7cf\") " pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.943300 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g576w\" (UniqueName: \"kubernetes.io/projected/26009bb4-7abf-4522-94c2-e63a94f8c7cb-kube-api-access-g576w\") pod \"neutron-aa5a-account-create-update-44sfn\" (UID: \"26009bb4-7abf-4522-94c2-e63a94f8c7cb\") " pod="openstack/neutron-aa5a-account-create-update-44sfn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.943341 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879d8ff7-8aac-4c43-999f-064029a8b7cf-config-data\") pod \"keystone-db-sync-dks6t\" (UID: \"879d8ff7-8aac-4c43-999f-064029a8b7cf\") " pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.944060 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26009bb4-7abf-4522-94c2-e63a94f8c7cb-operator-scripts\") pod \"neutron-aa5a-account-create-update-44sfn\" (UID: \"26009bb4-7abf-4522-94c2-e63a94f8c7cb\") " 
pod="openstack/neutron-aa5a-account-create-update-44sfn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.947144 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b57a-account-create-update-xqckx" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.960566 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g576w\" (UniqueName: \"kubernetes.io/projected/26009bb4-7abf-4522-94c2-e63a94f8c7cb-kube-api-access-g576w\") pod \"neutron-aa5a-account-create-update-44sfn\" (UID: \"26009bb4-7abf-4522-94c2-e63a94f8c7cb\") " pod="openstack/neutron-aa5a-account-create-update-44sfn" Dec 05 19:23:43 crc kubenswrapper[4828]: I1205 19:23:43.962301 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fcdj6" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.045028 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879d8ff7-8aac-4c43-999f-064029a8b7cf-config-data\") pod \"keystone-db-sync-dks6t\" (UID: \"879d8ff7-8aac-4c43-999f-064029a8b7cf\") " pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.045145 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879d8ff7-8aac-4c43-999f-064029a8b7cf-combined-ca-bundle\") pod \"keystone-db-sync-dks6t\" (UID: \"879d8ff7-8aac-4c43-999f-064029a8b7cf\") " pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.045211 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p55cg\" (UniqueName: \"kubernetes.io/projected/879d8ff7-8aac-4c43-999f-064029a8b7cf-kube-api-access-p55cg\") pod \"keystone-db-sync-dks6t\" (UID: \"879d8ff7-8aac-4c43-999f-064029a8b7cf\") " pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.046632 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-aa5a-account-create-update-44sfn" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.053350 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879d8ff7-8aac-4c43-999f-064029a8b7cf-config-data\") pod \"keystone-db-sync-dks6t\" (UID: \"879d8ff7-8aac-4c43-999f-064029a8b7cf\") " pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.056963 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.063202 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p55cg\" (UniqueName: \"kubernetes.io/projected/879d8ff7-8aac-4c43-999f-064029a8b7cf-kube-api-access-p55cg\") pod \"keystone-db-sync-dks6t\" (UID: \"879d8ff7-8aac-4c43-999f-064029a8b7cf\") " pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.065185 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879d8ff7-8aac-4c43-999f-064029a8b7cf-combined-ca-bundle\") pod \"keystone-db-sync-dks6t\" (UID: \"879d8ff7-8aac-4c43-999f-064029a8b7cf\") " pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.066600 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.082326 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.091953 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rf9j4"] Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.187842 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5613-account-create-update-9xhmx"] Dec 05 19:23:44 crc kubenswrapper[4828]: W1205 19:23:44.191016 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod398a387b_e59c_486c_a39c_a0e0f45c75a2.slice/crio-27fd6da24184231f0d7db3f5cfdc14a01e16a6fe49e10a9e67404749ff52bcd6 WatchSource:0}: Error finding container 27fd6da24184231f0d7db3f5cfdc14a01e16a6fe49e10a9e67404749ff52bcd6: Status 404 returned error can't find the container with id 27fd6da24184231f0d7db3f5cfdc14a01e16a6fe49e10a9e67404749ff52bcd6 Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.196004 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6lkzn"] Dec 05 19:23:44 crc kubenswrapper[4828]: W1205 19:23:44.197516 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9628efae_96b7_43fb_a5cc_05279f664d77.slice/crio-ec93f2d8ef931047b48c7845acf0d8a4ebf2709c232c318bb8f560f4879b21a1 WatchSource:0}: Error finding container ec93f2d8ef931047b48c7845acf0d8a4ebf2709c232c318bb8f560f4879b21a1: Status 404 returned error can't find the container with id ec93f2d8ef931047b48c7845acf0d8a4ebf2709c232c318bb8f560f4879b21a1 Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.248271 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-ovsdbserver-nb\") pod \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.248572 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-ovsdbserver-sb\") pod \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.248593 4828 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-dns-svc\") pod \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.248643 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-config\") pod \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.248689 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvfk2\" (UniqueName: \"kubernetes.io/projected/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-kube-api-access-bvfk2\") pod \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\" (UID: \"065a4655-bbd3-4a8e-9daf-56c03db0e6c0\") " Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.253231 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "065a4655-bbd3-4a8e-9daf-56c03db0e6c0" (UID: "065a4655-bbd3-4a8e-9daf-56c03db0e6c0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.253531 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "065a4655-bbd3-4a8e-9daf-56c03db0e6c0" (UID: "065a4655-bbd3-4a8e-9daf-56c03db0e6c0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.253768 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "065a4655-bbd3-4a8e-9daf-56c03db0e6c0" (UID: "065a4655-bbd3-4a8e-9daf-56c03db0e6c0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.254032 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-config" (OuterVolumeSpecName: "config") pod "065a4655-bbd3-4a8e-9daf-56c03db0e6c0" (UID: "065a4655-bbd3-4a8e-9daf-56c03db0e6c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.254067 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-kube-api-access-bvfk2" (OuterVolumeSpecName: "kube-api-access-bvfk2") pod "065a4655-bbd3-4a8e-9daf-56c03db0e6c0" (UID: "065a4655-bbd3-4a8e-9daf-56c03db0e6c0"). InnerVolumeSpecName "kube-api-access-bvfk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.351238 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.351272 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.351296 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.351305 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.351314 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvfk2\" (UniqueName: \"kubernetes.io/projected/065a4655-bbd3-4a8e-9daf-56c03db0e6c0-kube-api-access-bvfk2\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.459644 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vfkq9"] Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.465372 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b57a-account-create-update-xqckx"] Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.472933 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fcdj6"] Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.676912 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dks6t"] Dec 05 19:23:44 crc kubenswrapper[4828]: I1205 19:23:44.712560 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-aa5a-account-create-update-44sfn"] Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.066346 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aa5a-account-create-update-44sfn" event={"ID":"26009bb4-7abf-4522-94c2-e63a94f8c7cb","Type":"ContainerStarted","Data":"d568684a3d1026528a84d45cee12bc135f48f9c77728b4e2f4d1012ec93e3625"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.066396 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aa5a-account-create-update-44sfn" event={"ID":"26009bb4-7abf-4522-94c2-e63a94f8c7cb","Type":"ContainerStarted","Data":"37713368fb826f1e6ada14d010138efab659b8f2db658136065cebbe54584d5e"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.070531 4828 generic.go:334] "Generic (PLEG): container finished" podID="c341eebe-b27c-4dee-bb8e-477cd913128b" containerID="8688d4944e7fa9ff38feedf119ac125c3f1f7c11a07f4f7661a47aaec7030503" exitCode=0 Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.070596 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fcdj6" event={"ID":"c341eebe-b27c-4dee-bb8e-477cd913128b","Type":"ContainerDied","Data":"8688d4944e7fa9ff38feedf119ac125c3f1f7c11a07f4f7661a47aaec7030503"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.070620 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-fcdj6" event={"ID":"c341eebe-b27c-4dee-bb8e-477cd913128b","Type":"ContainerStarted","Data":"cfcb6b434ec4dd2fa2da2c92f4ff83584861484e8986e78ae05294dddf399571"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.071740 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b57a-account-create-update-xqckx" event={"ID":"5bba95cd-b21d-4f44-b575-59527cf3b537","Type":"ContainerStarted","Data":"ac4fba861c6a76224b6bc1adbf0e709bc038d56f599e21f5f2f9543b24d3d152"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.071759 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b57a-account-create-update-xqckx" event={"ID":"5bba95cd-b21d-4f44-b575-59527cf3b537","Type":"ContainerStarted","Data":"5375a7f1b8b88d382a0174c12285b211373ce4be841919c1b10ff4343d0c8e6c"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.073026 4828 generic.go:334] "Generic (PLEG): container finished" podID="b3fd15a0-f362-4f74-bde6-0df71598dcc9" containerID="43e39dbe794c7e3dbda87b4580e5d78e097bd2d4c5a8b89437e49271b3e16cc8" exitCode=0 Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.073072 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vfkq9" event={"ID":"b3fd15a0-f362-4f74-bde6-0df71598dcc9","Type":"ContainerDied","Data":"43e39dbe794c7e3dbda87b4580e5d78e097bd2d4c5a8b89437e49271b3e16cc8"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.073095 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vfkq9" event={"ID":"b3fd15a0-f362-4f74-bde6-0df71598dcc9","Type":"ContainerStarted","Data":"30b667618ee46d9e1e51d6e67fe54686f27e08d423910502446241225768c2c3"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.074790 4828 generic.go:334] "Generic (PLEG): container finished" podID="176021e9-3e79-43bb-9c96-54d69defaba1" containerID="35dac0a5fe66ec5dfdbc02ed5c5ec416cc28e2942c60f2c59ef6d04aff71ada6" exitCode=0 Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.074845 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6lkzn" event={"ID":"176021e9-3e79-43bb-9c96-54d69defaba1","Type":"ContainerDied","Data":"35dac0a5fe66ec5dfdbc02ed5c5ec416cc28e2942c60f2c59ef6d04aff71ada6"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.075014 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6lkzn" event={"ID":"176021e9-3e79-43bb-9c96-54d69defaba1","Type":"ContainerStarted","Data":"b00c535f6b2a8234a865a7cfb0a3d85d7449236d9798579c52783abffa4d030a"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.075965 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dks6t" event={"ID":"879d8ff7-8aac-4c43-999f-064029a8b7cf","Type":"ContainerStarted","Data":"d9050548b9f75838da41b147e0754aeb94c8bf59e91dd3133c6865d5ec84a2fe"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.079507 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5613-account-create-update-9xhmx" event={"ID":"9628efae-96b7-43fb-a5cc-05279f664d77","Type":"ContainerStarted","Data":"1c45b0e64d1cdd1b34456146d890d9bf6d450ef8cb9df61191b0ca02bc688b94"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.079548 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5613-account-create-update-9xhmx" event={"ID":"9628efae-96b7-43fb-a5cc-05279f664d77","Type":"ContainerStarted","Data":"ec93f2d8ef931047b48c7845acf0d8a4ebf2709c232c318bb8f560f4879b21a1"} 
Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.084101 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-aa5a-account-create-update-44sfn" podStartSLOduration=2.084087276 podStartE2EDuration="2.084087276s" podCreationTimestamp="2025-12-05 19:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:23:45.077018983 +0000 UTC m=+1202.972241299" watchObservedRunningTime="2025-12-05 19:23:45.084087276 +0000 UTC m=+1202.979309582" Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.084833 4828 generic.go:334] "Generic (PLEG): container finished" podID="398a387b-e59c-486c-a39c-a0e0f45c75a2" containerID="8579c88a9420a6586f124c7f2f06c91c3d09e47fdc646e3ff0320f073edd0dc9" exitCode=0 Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.084915 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-hls9v" Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.084972 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" event={"ID":"398a387b-e59c-486c-a39c-a0e0f45c75a2","Type":"ContainerDied","Data":"8579c88a9420a6586f124c7f2f06c91c3d09e47fdc646e3ff0320f073edd0dc9"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.085037 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" event={"ID":"398a387b-e59c-486c-a39c-a0e0f45c75a2","Type":"ContainerStarted","Data":"27fd6da24184231f0d7db3f5cfdc14a01e16a6fe49e10a9e67404749ff52bcd6"} Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.136667 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b57a-account-create-update-xqckx" podStartSLOduration=2.136648396 podStartE2EDuration="2.136648396s" podCreationTimestamp="2025-12-05 19:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:23:45.123107106 +0000 UTC m=+1203.018329412" watchObservedRunningTime="2025-12-05 19:23:45.136648396 +0000 UTC m=+1203.031870702" Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.532078 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-hls9v"] Dec 05 19:23:45 crc kubenswrapper[4828]: I1205 19:23:45.539008 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-hls9v"] Dec 05 19:23:46 crc kubenswrapper[4828]: I1205 19:23:46.093739 4828 generic.go:334] "Generic (PLEG): container finished" podID="5bba95cd-b21d-4f44-b575-59527cf3b537" containerID="ac4fba861c6a76224b6bc1adbf0e709bc038d56f599e21f5f2f9543b24d3d152" exitCode=0 Dec 05 19:23:46 crc kubenswrapper[4828]: I1205 19:23:46.093797 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b57a-account-create-update-xqckx" event={"ID":"5bba95cd-b21d-4f44-b575-59527cf3b537","Type":"ContainerDied","Data":"ac4fba861c6a76224b6bc1adbf0e709bc038d56f599e21f5f2f9543b24d3d152"} Dec 05 19:23:46 crc kubenswrapper[4828]: I1205 19:23:46.095716 4828 generic.go:334] "Generic (PLEG): container finished" podID="9628efae-96b7-43fb-a5cc-05279f664d77" containerID="1c45b0e64d1cdd1b34456146d890d9bf6d450ef8cb9df61191b0ca02bc688b94" exitCode=0 Dec 05 19:23:46 crc kubenswrapper[4828]: I1205 19:23:46.095789 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-5613-account-create-update-9xhmx" event={"ID":"9628efae-96b7-43fb-a5cc-05279f664d77","Type":"ContainerDied","Data":"1c45b0e64d1cdd1b34456146d890d9bf6d450ef8cb9df61191b0ca02bc688b94"} Dec 05 19:23:46 crc kubenswrapper[4828]: I1205 19:23:46.098469 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" event={"ID":"398a387b-e59c-486c-a39c-a0e0f45c75a2","Type":"ContainerStarted","Data":"37bb34b2e7a9f1094c4687bc1df6e79504acda85f6285808deea147aabb59295"} Dec 05 19:23:46 crc kubenswrapper[4828]: I1205 19:23:46.098578 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:46 crc kubenswrapper[4828]: I1205 19:23:46.100095 4828 generic.go:334] "Generic (PLEG): container finished" podID="26009bb4-7abf-4522-94c2-e63a94f8c7cb" containerID="d568684a3d1026528a84d45cee12bc135f48f9c77728b4e2f4d1012ec93e3625" exitCode=0 Dec 05 19:23:46 crc kubenswrapper[4828]: I1205 19:23:46.100343 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aa5a-account-create-update-44sfn" event={"ID":"26009bb4-7abf-4522-94c2-e63a94f8c7cb","Type":"ContainerDied","Data":"d568684a3d1026528a84d45cee12bc135f48f9c77728b4e2f4d1012ec93e3625"} Dec 05 19:23:46 crc kubenswrapper[4828]: I1205 19:23:46.141006 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" podStartSLOduration=3.140980407 podStartE2EDuration="3.140980407s" podCreationTimestamp="2025-12-05 19:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:23:46.132060983 +0000 UTC m=+1204.027283289" watchObservedRunningTime="2025-12-05 19:23:46.140980407 +0000 UTC m=+1204.036202753" Dec 05 19:23:46 crc kubenswrapper[4828]: I1205 19:23:46.460744 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065a4655-bbd3-4a8e-9daf-56c03db0e6c0" path="/var/lib/kubelet/pods/065a4655-bbd3-4a8e-9daf-56c03db0e6c0/volumes" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.018074 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vfkq9" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.024050 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5613-account-create-update-9xhmx" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.029817 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fcdj6" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.040082 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6lkzn" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.069172 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b57a-account-create-update-xqckx" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.074515 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fd15a0-f362-4f74-bde6-0df71598dcc9-operator-scripts\") pod \"b3fd15a0-f362-4f74-bde6-0df71598dcc9\" (UID: \"b3fd15a0-f362-4f74-bde6-0df71598dcc9\") " Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.074743 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtwv7\" (UniqueName: \"kubernetes.io/projected/b3fd15a0-f362-4f74-bde6-0df71598dcc9-kube-api-access-xtwv7\") pod \"b3fd15a0-f362-4f74-bde6-0df71598dcc9\" (UID: \"b3fd15a0-f362-4f74-bde6-0df71598dcc9\") " Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.075955 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fd15a0-f362-4f74-bde6-0df71598dcc9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3fd15a0-f362-4f74-bde6-0df71598dcc9" (UID: "b3fd15a0-f362-4f74-bde6-0df71598dcc9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.077274 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-aa5a-account-create-update-44sfn" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.103598 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3fd15a0-f362-4f74-bde6-0df71598dcc9-kube-api-access-xtwv7" (OuterVolumeSpecName: "kube-api-access-xtwv7") pod "b3fd15a0-f362-4f74-bde6-0df71598dcc9" (UID: "b3fd15a0-f362-4f74-bde6-0df71598dcc9"). InnerVolumeSpecName "kube-api-access-xtwv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.152433 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5613-account-create-update-9xhmx" event={"ID":"9628efae-96b7-43fb-a5cc-05279f664d77","Type":"ContainerDied","Data":"ec93f2d8ef931047b48c7845acf0d8a4ebf2709c232c318bb8f560f4879b21a1"} Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.152483 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec93f2d8ef931047b48c7845acf0d8a4ebf2709c232c318bb8f560f4879b21a1" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.152576 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5613-account-create-update-9xhmx" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.154679 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aa5a-account-create-update-44sfn" event={"ID":"26009bb4-7abf-4522-94c2-e63a94f8c7cb","Type":"ContainerDied","Data":"37713368fb826f1e6ada14d010138efab659b8f2db658136065cebbe54584d5e"} Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.154726 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37713368fb826f1e6ada14d010138efab659b8f2db658136065cebbe54584d5e" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.155215 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-aa5a-account-create-update-44sfn" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.156199 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fcdj6" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.156594 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fcdj6" event={"ID":"c341eebe-b27c-4dee-bb8e-477cd913128b","Type":"ContainerDied","Data":"cfcb6b434ec4dd2fa2da2c92f4ff83584861484e8986e78ae05294dddf399571"} Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.156620 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfcb6b434ec4dd2fa2da2c92f4ff83584861484e8986e78ae05294dddf399571" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.158409 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b57a-account-create-update-xqckx" event={"ID":"5bba95cd-b21d-4f44-b575-59527cf3b537","Type":"ContainerDied","Data":"5375a7f1b8b88d382a0174c12285b211373ce4be841919c1b10ff4343d0c8e6c"} Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.158481 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5375a7f1b8b88d382a0174c12285b211373ce4be841919c1b10ff4343d0c8e6c" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.158422 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b57a-account-create-update-xqckx" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.160668 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vfkq9" event={"ID":"b3fd15a0-f362-4f74-bde6-0df71598dcc9","Type":"ContainerDied","Data":"30b667618ee46d9e1e51d6e67fe54686f27e08d423910502446241225768c2c3"} Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.160786 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30b667618ee46d9e1e51d6e67fe54686f27e08d423910502446241225768c2c3" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.160944 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vfkq9" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.162477 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6lkzn" event={"ID":"176021e9-3e79-43bb-9c96-54d69defaba1","Type":"ContainerDied","Data":"b00c535f6b2a8234a865a7cfb0a3d85d7449236d9798579c52783abffa4d030a"} Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.162523 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b00c535f6b2a8234a865a7cfb0a3d85d7449236d9798579c52783abffa4d030a" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.162560 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-6lkzn" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.176697 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxclm\" (UniqueName: \"kubernetes.io/projected/9628efae-96b7-43fb-a5cc-05279f664d77-kube-api-access-rxclm\") pod \"9628efae-96b7-43fb-a5cc-05279f664d77\" (UID: \"9628efae-96b7-43fb-a5cc-05279f664d77\") " Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.176906 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176021e9-3e79-43bb-9c96-54d69defaba1-operator-scripts\") pod \"176021e9-3e79-43bb-9c96-54d69defaba1\" (UID: \"176021e9-3e79-43bb-9c96-54d69defaba1\") " Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.177007 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bba95cd-b21d-4f44-b575-59527cf3b537-operator-scripts\") pod \"5bba95cd-b21d-4f44-b575-59527cf3b537\" (UID: \"5bba95cd-b21d-4f44-b575-59527cf3b537\") " Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.177119 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsd5s\" (UniqueName: \"kubernetes.io/projected/c341eebe-b27c-4dee-bb8e-477cd913128b-kube-api-access-gsd5s\") pod \"c341eebe-b27c-4dee-bb8e-477cd913128b\" (UID: \"c341eebe-b27c-4dee-bb8e-477cd913128b\") " Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.177203 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g576w\" (UniqueName: \"kubernetes.io/projected/26009bb4-7abf-4522-94c2-e63a94f8c7cb-kube-api-access-g576w\") pod \"26009bb4-7abf-4522-94c2-e63a94f8c7cb\" (UID: \"26009bb4-7abf-4522-94c2-e63a94f8c7cb\") " Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.177428 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176021e9-3e79-43bb-9c96-54d69defaba1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "176021e9-3e79-43bb-9c96-54d69defaba1" (UID: "176021e9-3e79-43bb-9c96-54d69defaba1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.177588 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bba95cd-b21d-4f44-b575-59527cf3b537-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5bba95cd-b21d-4f44-b575-59527cf3b537" (UID: "5bba95cd-b21d-4f44-b575-59527cf3b537"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.178091 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvfr8\" (UniqueName: \"kubernetes.io/projected/176021e9-3e79-43bb-9c96-54d69defaba1-kube-api-access-zvfr8\") pod \"176021e9-3e79-43bb-9c96-54d69defaba1\" (UID: \"176021e9-3e79-43bb-9c96-54d69defaba1\") " Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.178166 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmw66\" (UniqueName: \"kubernetes.io/projected/5bba95cd-b21d-4f44-b575-59527cf3b537-kube-api-access-tmw66\") pod \"5bba95cd-b21d-4f44-b575-59527cf3b537\" (UID: \"5bba95cd-b21d-4f44-b575-59527cf3b537\") " Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.178288 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26009bb4-7abf-4522-94c2-e63a94f8c7cb-operator-scripts\") pod \"26009bb4-7abf-4522-94c2-e63a94f8c7cb\" (UID: \"26009bb4-7abf-4522-94c2-e63a94f8c7cb\") " Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.178324 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9628efae-96b7-43fb-a5cc-05279f664d77-operator-scripts\") pod \"9628efae-96b7-43fb-a5cc-05279f664d77\" (UID: \"9628efae-96b7-43fb-a5cc-05279f664d77\") " Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.178376 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c341eebe-b27c-4dee-bb8e-477cd913128b-operator-scripts\") pod \"c341eebe-b27c-4dee-bb8e-477cd913128b\" (UID: \"c341eebe-b27c-4dee-bb8e-477cd913128b\") " Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.178890 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c341eebe-b27c-4dee-bb8e-477cd913128b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c341eebe-b27c-4dee-bb8e-477cd913128b" (UID: "c341eebe-b27c-4dee-bb8e-477cd913128b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.178909 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26009bb4-7abf-4522-94c2-e63a94f8c7cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26009bb4-7abf-4522-94c2-e63a94f8c7cb" (UID: "26009bb4-7abf-4522-94c2-e63a94f8c7cb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.179013 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26009bb4-7abf-4522-94c2-e63a94f8c7cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.179029 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c341eebe-b27c-4dee-bb8e-477cd913128b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.179038 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176021e9-3e79-43bb-9c96-54d69defaba1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.179048 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bba95cd-b21d-4f44-b575-59527cf3b537-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.179058 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtwv7\" (UniqueName: \"kubernetes.io/projected/b3fd15a0-f362-4f74-bde6-0df71598dcc9-kube-api-access-xtwv7\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.179069 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fd15a0-f362-4f74-bde6-0df71598dcc9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.179555 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9628efae-96b7-43fb-a5cc-05279f664d77-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9628efae-96b7-43fb-a5cc-05279f664d77" (UID: "9628efae-96b7-43fb-a5cc-05279f664d77"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.180420 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9628efae-96b7-43fb-a5cc-05279f664d77-kube-api-access-rxclm" (OuterVolumeSpecName: "kube-api-access-rxclm") pod "9628efae-96b7-43fb-a5cc-05279f664d77" (UID: "9628efae-96b7-43fb-a5cc-05279f664d77"). InnerVolumeSpecName "kube-api-access-rxclm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.181134 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c341eebe-b27c-4dee-bb8e-477cd913128b-kube-api-access-gsd5s" (OuterVolumeSpecName: "kube-api-access-gsd5s") pod "c341eebe-b27c-4dee-bb8e-477cd913128b" (UID: "c341eebe-b27c-4dee-bb8e-477cd913128b"). InnerVolumeSpecName "kube-api-access-gsd5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.183872 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176021e9-3e79-43bb-9c96-54d69defaba1-kube-api-access-zvfr8" (OuterVolumeSpecName: "kube-api-access-zvfr8") pod "176021e9-3e79-43bb-9c96-54d69defaba1" (UID: "176021e9-3e79-43bb-9c96-54d69defaba1"). InnerVolumeSpecName "kube-api-access-zvfr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.184001 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26009bb4-7abf-4522-94c2-e63a94f8c7cb-kube-api-access-g576w" (OuterVolumeSpecName: "kube-api-access-g576w") pod "26009bb4-7abf-4522-94c2-e63a94f8c7cb" (UID: "26009bb4-7abf-4522-94c2-e63a94f8c7cb"). InnerVolumeSpecName "kube-api-access-g576w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.199422 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bba95cd-b21d-4f44-b575-59527cf3b537-kube-api-access-tmw66" (OuterVolumeSpecName: "kube-api-access-tmw66") pod "5bba95cd-b21d-4f44-b575-59527cf3b537" (UID: "5bba95cd-b21d-4f44-b575-59527cf3b537"). InnerVolumeSpecName "kube-api-access-tmw66". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.280647 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmw66\" (UniqueName: \"kubernetes.io/projected/5bba95cd-b21d-4f44-b575-59527cf3b537-kube-api-access-tmw66\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.280695 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9628efae-96b7-43fb-a5cc-05279f664d77-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.280712 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxclm\" (UniqueName: \"kubernetes.io/projected/9628efae-96b7-43fb-a5cc-05279f664d77-kube-api-access-rxclm\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.280723 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsd5s\" (UniqueName: \"kubernetes.io/projected/c341eebe-b27c-4dee-bb8e-477cd913128b-kube-api-access-gsd5s\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.280738 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g576w\" (UniqueName: \"kubernetes.io/projected/26009bb4-7abf-4522-94c2-e63a94f8c7cb-kube-api-access-g576w\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:49 crc kubenswrapper[4828]: I1205 19:23:49.280748 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvfr8\" (UniqueName: \"kubernetes.io/projected/176021e9-3e79-43bb-9c96-54d69defaba1-kube-api-access-zvfr8\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:50 crc kubenswrapper[4828]: I1205 19:23:50.173083 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dks6t" event={"ID":"879d8ff7-8aac-4c43-999f-064029a8b7cf","Type":"ContainerStarted","Data":"07551b75d08158b4fcfd49746b8f3e1fb6b9b91f8dcc4e6be101a999c173796c"} Dec 05 19:23:50 crc kubenswrapper[4828]: I1205 19:23:50.192534 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-dks6t" podStartSLOduration=2.996962611 podStartE2EDuration="7.192510659s" podCreationTimestamp="2025-12-05 19:23:43 +0000 UTC" firstStartedPulling="2025-12-05 19:23:44.678315257 +0000 UTC m=+1202.573537563" lastFinishedPulling="2025-12-05 19:23:48.873863295 +0000 UTC m=+1206.769085611" observedRunningTime="2025-12-05 19:23:50.186847514 +0000 UTC m=+1208.082069830" watchObservedRunningTime="2025-12-05 
19:23:50.192510659 +0000 UTC m=+1208.087732965" Dec 05 19:23:53 crc kubenswrapper[4828]: I1205 19:23:53.201105 4828 generic.go:334] "Generic (PLEG): container finished" podID="879d8ff7-8aac-4c43-999f-064029a8b7cf" containerID="07551b75d08158b4fcfd49746b8f3e1fb6b9b91f8dcc4e6be101a999c173796c" exitCode=0 Dec 05 19:23:53 crc kubenswrapper[4828]: I1205 19:23:53.201334 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dks6t" event={"ID":"879d8ff7-8aac-4c43-999f-064029a8b7cf","Type":"ContainerDied","Data":"07551b75d08158b4fcfd49746b8f3e1fb6b9b91f8dcc4e6be101a999c173796c"} Dec 05 19:23:53 crc kubenswrapper[4828]: I1205 19:23:53.502146 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:23:53 crc kubenswrapper[4828]: I1205 19:23:53.565366 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rd9jv"] Dec 05 19:23:53 crc kubenswrapper[4828]: I1205 19:23:53.565627 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-rd9jv" podUID="b0969276-79d6-4176-9211-af61074920b1" containerName="dnsmasq-dns" containerID="cri-o://2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21" gracePeriod=10 Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.060742 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.161510 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-config\") pod \"b0969276-79d6-4176-9211-af61074920b1\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.161915 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-dns-svc\") pod \"b0969276-79d6-4176-9211-af61074920b1\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.162112 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j79jf\" (UniqueName: \"kubernetes.io/projected/b0969276-79d6-4176-9211-af61074920b1-kube-api-access-j79jf\") pod \"b0969276-79d6-4176-9211-af61074920b1\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.162258 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-ovsdbserver-nb\") pod \"b0969276-79d6-4176-9211-af61074920b1\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.162378 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-ovsdbserver-sb\") pod \"b0969276-79d6-4176-9211-af61074920b1\" (UID: \"b0969276-79d6-4176-9211-af61074920b1\") " Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.174208 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0969276-79d6-4176-9211-af61074920b1-kube-api-access-j79jf" (OuterVolumeSpecName: "kube-api-access-j79jf") pod 
"b0969276-79d6-4176-9211-af61074920b1" (UID: "b0969276-79d6-4176-9211-af61074920b1"). InnerVolumeSpecName "kube-api-access-j79jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.210815 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b0969276-79d6-4176-9211-af61074920b1" (UID: "b0969276-79d6-4176-9211-af61074920b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.214768 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b0969276-79d6-4176-9211-af61074920b1" (UID: "b0969276-79d6-4176-9211-af61074920b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.215558 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-config" (OuterVolumeSpecName: "config") pod "b0969276-79d6-4176-9211-af61074920b1" (UID: "b0969276-79d6-4176-9211-af61074920b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.216106 4828 generic.go:334] "Generic (PLEG): container finished" podID="b0969276-79d6-4176-9211-af61074920b1" containerID="2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21" exitCode=0 Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.216289 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rd9jv" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.216844 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rd9jv" event={"ID":"b0969276-79d6-4176-9211-af61074920b1","Type":"ContainerDied","Data":"2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21"} Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.216875 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rd9jv" event={"ID":"b0969276-79d6-4176-9211-af61074920b1","Type":"ContainerDied","Data":"c56f794452e21a624be952df009552d5e0142283f09830d851d7c78a2801abd0"} Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.216891 4828 scope.go:117] "RemoveContainer" containerID="2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.221864 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b0969276-79d6-4176-9211-af61074920b1" (UID: "b0969276-79d6-4176-9211-af61074920b1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.275279 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.275336 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j79jf\" (UniqueName: \"kubernetes.io/projected/b0969276-79d6-4176-9211-af61074920b1-kube-api-access-j79jf\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.275350 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.275361 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.275376 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0969276-79d6-4176-9211-af61074920b1-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.284160 4828 scope.go:117] "RemoveContainer" containerID="dca3c9e055b5bf22d9c0c927686af5ab716d562b94ac222c8737863bc94a33e1" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.307739 4828 scope.go:117] "RemoveContainer" containerID="2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21" Dec 05 19:23:54 crc kubenswrapper[4828]: E1205 19:23:54.308126 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21\": container with ID starting with 2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21 not found: ID does not exist" containerID="2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.308154 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21"} err="failed to get container status \"2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21\": rpc error: code = NotFound desc = could not find container \"2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21\": container with ID starting with 2d0bbe00eb47faeb7479bc21ef0dd400b841459a708728be51a22df698c26d21 not found: ID does not exist" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.308175 4828 scope.go:117] "RemoveContainer" containerID="dca3c9e055b5bf22d9c0c927686af5ab716d562b94ac222c8737863bc94a33e1" Dec 05 19:23:54 crc kubenswrapper[4828]: E1205 19:23:54.308496 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dca3c9e055b5bf22d9c0c927686af5ab716d562b94ac222c8737863bc94a33e1\": container with ID starting with dca3c9e055b5bf22d9c0c927686af5ab716d562b94ac222c8737863bc94a33e1 not found: ID does not exist" containerID="dca3c9e055b5bf22d9c0c927686af5ab716d562b94ac222c8737863bc94a33e1" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.308511 4828 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"dca3c9e055b5bf22d9c0c927686af5ab716d562b94ac222c8737863bc94a33e1"} err="failed to get container status \"dca3c9e055b5bf22d9c0c927686af5ab716d562b94ac222c8737863bc94a33e1\": rpc error: code = NotFound desc = could not find container \"dca3c9e055b5bf22d9c0c927686af5ab716d562b94ac222c8737863bc94a33e1\": container with ID starting with dca3c9e055b5bf22d9c0c927686af5ab716d562b94ac222c8737863bc94a33e1 not found: ID does not exist" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.489938 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.545627 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rd9jv"] Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.554993 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rd9jv"] Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.581412 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879d8ff7-8aac-4c43-999f-064029a8b7cf-config-data\") pod \"879d8ff7-8aac-4c43-999f-064029a8b7cf\" (UID: \"879d8ff7-8aac-4c43-999f-064029a8b7cf\") " Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.581460 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p55cg\" (UniqueName: \"kubernetes.io/projected/879d8ff7-8aac-4c43-999f-064029a8b7cf-kube-api-access-p55cg\") pod \"879d8ff7-8aac-4c43-999f-064029a8b7cf\" (UID: \"879d8ff7-8aac-4c43-999f-064029a8b7cf\") " Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.581574 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879d8ff7-8aac-4c43-999f-064029a8b7cf-combined-ca-bundle\") pod \"879d8ff7-8aac-4c43-999f-064029a8b7cf\" (UID: \"879d8ff7-8aac-4c43-999f-064029a8b7cf\") " Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.584999 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879d8ff7-8aac-4c43-999f-064029a8b7cf-kube-api-access-p55cg" (OuterVolumeSpecName: "kube-api-access-p55cg") pod "879d8ff7-8aac-4c43-999f-064029a8b7cf" (UID: "879d8ff7-8aac-4c43-999f-064029a8b7cf"). InnerVolumeSpecName "kube-api-access-p55cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.604643 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879d8ff7-8aac-4c43-999f-064029a8b7cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "879d8ff7-8aac-4c43-999f-064029a8b7cf" (UID: "879d8ff7-8aac-4c43-999f-064029a8b7cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.619932 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879d8ff7-8aac-4c43-999f-064029a8b7cf-config-data" (OuterVolumeSpecName: "config-data") pod "879d8ff7-8aac-4c43-999f-064029a8b7cf" (UID: "879d8ff7-8aac-4c43-999f-064029a8b7cf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.684254 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879d8ff7-8aac-4c43-999f-064029a8b7cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.684311 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879d8ff7-8aac-4c43-999f-064029a8b7cf-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:54 crc kubenswrapper[4828]: I1205 19:23:54.684329 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p55cg\" (UniqueName: \"kubernetes.io/projected/879d8ff7-8aac-4c43-999f-064029a8b7cf-kube-api-access-p55cg\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.228807 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dks6t" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.228864 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dks6t" event={"ID":"879d8ff7-8aac-4c43-999f-064029a8b7cf","Type":"ContainerDied","Data":"d9050548b9f75838da41b147e0754aeb94c8bf59e91dd3133c6865d5ec84a2fe"} Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.228942 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9050548b9f75838da41b147e0754aeb94c8bf59e91dd3133c6865d5ec84a2fe" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.509942 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7p8cq"] Dec 05 19:23:55 crc kubenswrapper[4828]: E1205 19:23:55.510368 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c341eebe-b27c-4dee-bb8e-477cd913128b" containerName="mariadb-database-create" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510390 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c341eebe-b27c-4dee-bb8e-477cd913128b" containerName="mariadb-database-create" Dec 05 19:23:55 crc kubenswrapper[4828]: E1205 19:23:55.510424 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0969276-79d6-4176-9211-af61074920b1" containerName="init" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510434 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0969276-79d6-4176-9211-af61074920b1" containerName="init" Dec 05 19:23:55 crc kubenswrapper[4828]: E1205 19:23:55.510453 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9628efae-96b7-43fb-a5cc-05279f664d77" containerName="mariadb-account-create-update" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510461 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9628efae-96b7-43fb-a5cc-05279f664d77" containerName="mariadb-account-create-update" Dec 05 19:23:55 crc kubenswrapper[4828]: E1205 19:23:55.510470 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879d8ff7-8aac-4c43-999f-064029a8b7cf" containerName="keystone-db-sync" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510477 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="879d8ff7-8aac-4c43-999f-064029a8b7cf" containerName="keystone-db-sync" Dec 05 19:23:55 crc kubenswrapper[4828]: E1205 19:23:55.510530 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26009bb4-7abf-4522-94c2-e63a94f8c7cb" 
containerName="mariadb-account-create-update" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510540 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="26009bb4-7abf-4522-94c2-e63a94f8c7cb" containerName="mariadb-account-create-update" Dec 05 19:23:55 crc kubenswrapper[4828]: E1205 19:23:55.510552 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bba95cd-b21d-4f44-b575-59527cf3b537" containerName="mariadb-account-create-update" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510560 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bba95cd-b21d-4f44-b575-59527cf3b537" containerName="mariadb-account-create-update" Dec 05 19:23:55 crc kubenswrapper[4828]: E1205 19:23:55.510578 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0969276-79d6-4176-9211-af61074920b1" containerName="dnsmasq-dns" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510587 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0969276-79d6-4176-9211-af61074920b1" containerName="dnsmasq-dns" Dec 05 19:23:55 crc kubenswrapper[4828]: E1205 19:23:55.510602 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3fd15a0-f362-4f74-bde6-0df71598dcc9" containerName="mariadb-database-create" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510609 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3fd15a0-f362-4f74-bde6-0df71598dcc9" containerName="mariadb-database-create" Dec 05 19:23:55 crc kubenswrapper[4828]: E1205 19:23:55.510632 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176021e9-3e79-43bb-9c96-54d69defaba1" containerName="mariadb-database-create" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510640 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="176021e9-3e79-43bb-9c96-54d69defaba1" containerName="mariadb-database-create" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510847 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="176021e9-3e79-43bb-9c96-54d69defaba1" containerName="mariadb-database-create" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510870 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c341eebe-b27c-4dee-bb8e-477cd913128b" containerName="mariadb-database-create" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510883 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="9628efae-96b7-43fb-a5cc-05279f664d77" containerName="mariadb-account-create-update" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510893 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="26009bb4-7abf-4522-94c2-e63a94f8c7cb" containerName="mariadb-account-create-update" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510903 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bba95cd-b21d-4f44-b575-59527cf3b537" containerName="mariadb-account-create-update" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510913 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3fd15a0-f362-4f74-bde6-0df71598dcc9" containerName="mariadb-database-create" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510922 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0969276-79d6-4176-9211-af61074920b1" containerName="dnsmasq-dns" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.510934 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="879d8ff7-8aac-4c43-999f-064029a8b7cf" 
containerName="keystone-db-sync" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.511744 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.515299 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.515385 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.515653 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.515720 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7rdh4" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.515845 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.526943 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7p8cq"] Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.578041 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-jgz4l"] Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.579350 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.593087 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-jgz4l"] Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.637420 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-config-data\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.638282 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-combined-ca-bundle\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.638396 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-scripts\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.638441 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-credential-keys\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.638507 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-fernet-keys\") pod 
\"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.638592 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68hdx\" (UniqueName: \"kubernetes.io/projected/a4e0e250-719c-4202-b157-e8af7f3a4441-kube-api-access-68hdx\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.726998 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-86bd78dcb9-8zs7w"] Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.728804 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.734705 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.734742 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-475km" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.735041 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.737423 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.739787 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-dns-svc\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.739856 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68hdx\" (UniqueName: \"kubernetes.io/projected/a4e0e250-719c-4202-b157-e8af7f3a4441-kube-api-access-68hdx\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.739892 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-horizon-secret-key\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.739926 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.739960 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mzq5\" (UniqueName: \"kubernetes.io/projected/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-kube-api-access-6mzq5\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 
19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.739985 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.740024 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-config-data\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.740812 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86bd78dcb9-8zs7w"] Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.745246 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-257b9\" (UniqueName: \"kubernetes.io/projected/17b32c97-cacb-4398-a4f9-7149fcfa178d-kube-api-access-257b9\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.745306 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-combined-ca-bundle\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.745356 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.745398 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-logs\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.745512 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-config\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.745613 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-scripts\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.745842 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-credential-keys\") pod 
\"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.745933 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-scripts\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.745981 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-fernet-keys\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.746001 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-config-data\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.756286 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-scripts\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.760841 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-combined-ca-bundle\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.762333 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-credential-keys\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.774933 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-config-data\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.775661 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-fernet-keys\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.795706 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68hdx\" (UniqueName: \"kubernetes.io/projected/a4e0e250-719c-4202-b157-e8af7f3a4441-kube-api-access-68hdx\") pod \"keystone-bootstrap-7p8cq\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 
19:23:55.829736 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.847203 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-horizon-secret-key\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.847245 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.847270 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mzq5\" (UniqueName: \"kubernetes.io/projected/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-kube-api-access-6mzq5\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.847291 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.847319 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-257b9\" (UniqueName: \"kubernetes.io/projected/17b32c97-cacb-4398-a4f9-7149fcfa178d-kube-api-access-257b9\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.847344 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.847364 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-logs\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.847381 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-config\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.847433 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-scripts\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " 
pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.847464 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-config-data\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.847488 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-dns-svc\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.850233 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-logs\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.855089 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-scripts\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.856332 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-config-data\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.860036 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-dns-svc\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.863690 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.863811 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-config\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.863860 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.864320 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.865635 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-horizon-secret-key\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.886137 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7gmm7"] Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.887406 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.893052 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nzb8d" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.893492 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.893710 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.912929 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mzq5\" (UniqueName: \"kubernetes.io/projected/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-kube-api-access-6mzq5\") pod \"horizon-86bd78dcb9-8zs7w\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") " pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.931163 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mdbj4"] Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.932736 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mdbj4" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.936234 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wsbsj" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.937206 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.936804 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.942725 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-257b9\" (UniqueName: \"kubernetes.io/projected/17b32c97-cacb-4398-a4f9-7149fcfa178d-kube-api-access-257b9\") pod \"dnsmasq-dns-847c4cc679-jgz4l\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.946311 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.948293 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/614bd6cb-e60d-4c28-9e7e-132ab3040deb-config\") pod \"neutron-db-sync-mdbj4\" (UID: \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\") " pod="openstack/neutron-db-sync-mdbj4" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.950949 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7gmm7"] Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.962250 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffc75dac-d7b0-41ce-ac4d-94f251036f95-etc-machine-id\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.969243 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjw66\" (UniqueName: \"kubernetes.io/projected/ffc75dac-d7b0-41ce-ac4d-94f251036f95-kube-api-access-mjw66\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.969411 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gkz\" (UniqueName: \"kubernetes.io/projected/614bd6cb-e60d-4c28-9e7e-132ab3040deb-kube-api-access-k7gkz\") pod \"neutron-db-sync-mdbj4\" (UID: \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\") " pod="openstack/neutron-db-sync-mdbj4" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.969503 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-config-data\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.969592 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-combined-ca-bundle\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.969700 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614bd6cb-e60d-4c28-9e7e-132ab3040deb-combined-ca-bundle\") pod \"neutron-db-sync-mdbj4\" (UID: \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\") " pod="openstack/neutron-db-sync-mdbj4" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.969925 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-scripts\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.970095 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-db-sync-config-data\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:55 crc kubenswrapper[4828]: I1205 19:23:55.973440 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mdbj4"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.064748 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77b4ccd85-stwwx"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.066099 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.078969 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77b4ccd85-stwwx"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.079695 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-db-sync-config-data\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.079756 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/614bd6cb-e60d-4c28-9e7e-132ab3040deb-config\") pod \"neutron-db-sync-mdbj4\" (UID: \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\") " pod="openstack/neutron-db-sync-mdbj4" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.079779 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffc75dac-d7b0-41ce-ac4d-94f251036f95-etc-machine-id\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.079806 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjw66\" (UniqueName: \"kubernetes.io/projected/ffc75dac-d7b0-41ce-ac4d-94f251036f95-kube-api-access-mjw66\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.079850 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gkz\" (UniqueName: \"kubernetes.io/projected/614bd6cb-e60d-4c28-9e7e-132ab3040deb-kube-api-access-k7gkz\") pod \"neutron-db-sync-mdbj4\" (UID: \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\") " pod="openstack/neutron-db-sync-mdbj4" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.079866 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-config-data\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.079880 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-combined-ca-bundle\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.079898 
4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614bd6cb-e60d-4c28-9e7e-132ab3040deb-combined-ca-bundle\") pod \"neutron-db-sync-mdbj4\" (UID: \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\") " pod="openstack/neutron-db-sync-mdbj4" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.079944 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-scripts\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.081455 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffc75dac-d7b0-41ce-ac4d-94f251036f95-etc-machine-id\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.092474 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-db-sync-config-data\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.092902 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-scripts\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.095030 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-config-data\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.097537 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-combined-ca-bundle\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.102520 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614bd6cb-e60d-4c28-9e7e-132ab3040deb-combined-ca-bundle\") pod \"neutron-db-sync-mdbj4\" (UID: \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\") " pod="openstack/neutron-db-sync-mdbj4" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.105405 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/614bd6cb-e60d-4c28-9e7e-132ab3040deb-config\") pod \"neutron-db-sync-mdbj4\" (UID: \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\") " pod="openstack/neutron-db-sync-mdbj4" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.112010 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.113374 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.117683 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.118972 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7gkz\" (UniqueName: \"kubernetes.io/projected/614bd6cb-e60d-4c28-9e7e-132ab3040deb-kube-api-access-k7gkz\") pod \"neutron-db-sync-mdbj4\" (UID: \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\") " pod="openstack/neutron-db-sync-mdbj4" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.120259 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.120528 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bp8rt" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.120727 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.128203 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjw66\" (UniqueName: \"kubernetes.io/projected/ffc75dac-d7b0-41ce-ac4d-94f251036f95-kube-api-access-mjw66\") pod \"cinder-db-sync-7gmm7\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.144744 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-9gl2x"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.146054 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9gl2x" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.147052 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86bd78dcb9-8zs7w" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.154989 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zllbn" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.155246 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181275 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b940b754-ad6e-454e-ab2a-242b1b63b344-config-data\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181315 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b940b754-ad6e-454e-ab2a-242b1b63b344-scripts\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181364 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181389 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht28x\" (UniqueName: \"kubernetes.io/projected/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-kube-api-access-ht28x\") pod \"barbican-db-sync-9gl2x\" (UID: \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\") " pod="openstack/barbican-db-sync-9gl2x" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181414 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-config-data\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181449 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181477 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-db-sync-config-data\") pod \"barbican-db-sync-9gl2x\" (UID: \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\") " pod="openstack/barbican-db-sync-9gl2x" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181494 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181516 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b940b754-ad6e-454e-ab2a-242b1b63b344-horizon-secret-key\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181543 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-combined-ca-bundle\") pod \"barbican-db-sync-9gl2x\" (UID: \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\") " pod="openstack/barbican-db-sync-9gl2x" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181562 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzgjt\" (UniqueName: \"kubernetes.io/projected/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-kube-api-access-kzgjt\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181581 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b940b754-ad6e-454e-ab2a-242b1b63b344-logs\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181608 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw5zq\" (UniqueName: \"kubernetes.io/projected/b940b754-ad6e-454e-ab2a-242b1b63b344-kube-api-access-tw5zq\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.181630 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.185943 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.220708 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-scripts\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.220849 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-logs\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.229695 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.235656 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.236067 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.320214 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.323639 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.323690 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-config-data\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.323710 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-log-httpd\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.323997 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-logs\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.324040 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b940b754-ad6e-454e-ab2a-242b1b63b344-config-data\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.324088 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b940b754-ad6e-454e-ab2a-242b1b63b344-scripts\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.324131 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.324165 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht28x\" (UniqueName: \"kubernetes.io/projected/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-kube-api-access-ht28x\") pod \"barbican-db-sync-9gl2x\" (UID: \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\") " pod="openstack/barbican-db-sync-9gl2x" 
Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.324209 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.325280 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-logs\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.325565 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-run-httpd\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.325918 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mdbj4" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.326623 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b940b754-ad6e-454e-ab2a-242b1b63b344-scripts\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.327524 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b940b754-ad6e-454e-ab2a-242b1b63b344-config-data\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.334681 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.341186 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-config-data\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.345425 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.345495 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.345514 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-db-sync-config-data\") pod \"barbican-db-sync-9gl2x\" (UID: \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\") " pod="openstack/barbican-db-sync-9gl2x" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.345557 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b940b754-ad6e-454e-ab2a-242b1b63b344-horizon-secret-key\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.345594 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-combined-ca-bundle\") pod \"barbican-db-sync-9gl2x\" (UID: \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\") " pod="openstack/barbican-db-sync-9gl2x" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.345626 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzgjt\" (UniqueName: \"kubernetes.io/projected/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-kube-api-access-kzgjt\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.345653 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b940b754-ad6e-454e-ab2a-242b1b63b344-logs\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.345674 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-scripts\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.345701 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw5zq\" (UniqueName: \"kubernetes.io/projected/b940b754-ad6e-454e-ab2a-242b1b63b344-kube-api-access-tw5zq\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.345731 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.345783 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw44h\" (UniqueName: \"kubernetes.io/projected/72755327-9414-46f2-b3ed-d19120b5876e-kube-api-access-kw44h\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.345837 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-scripts\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.352147 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.354288 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b940b754-ad6e-454e-ab2a-242b1b63b344-logs\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.354541 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.361744 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b940b754-ad6e-454e-ab2a-242b1b63b344-horizon-secret-key\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.368057 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-db-sync-config-data\") pod \"barbican-db-sync-9gl2x\" (UID: \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\") " pod="openstack/barbican-db-sync-9gl2x" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.380003 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.381812 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-combined-ca-bundle\") pod \"barbican-db-sync-9gl2x\" (UID: \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\") " pod="openstack/barbican-db-sync-9gl2x" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.382397 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-scripts\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.382913 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.382995 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.384306 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.388689 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht28x\" (UniqueName: \"kubernetes.io/projected/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-kube-api-access-ht28x\") pod \"barbican-db-sync-9gl2x\" (UID: \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\") " pod="openstack/barbican-db-sync-9gl2x" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.393479 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-config-data\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.415439 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw5zq\" (UniqueName: \"kubernetes.io/projected/b940b754-ad6e-454e-ab2a-242b1b63b344-kube-api-access-tw5zq\") pod \"horizon-77b4ccd85-stwwx\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.416981 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzgjt\" (UniqueName: \"kubernetes.io/projected/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-kube-api-access-kzgjt\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.427239 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.439721 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.439961 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.440881 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9gl2x"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.453763 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.456160 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.456227 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-config-data\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.456250 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-log-httpd\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.456987 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-log-httpd\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.457035 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.457118 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-run-httpd\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.462444 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-run-httpd\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.525111 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-scripts\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: 
I1205 19:23:56.525270 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw44h\" (UniqueName: \"kubernetes.io/projected/72755327-9414-46f2-b3ed-d19120b5876e-kube-api-access-kw44h\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.531155 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-scripts\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.553583 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.555927 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.556219 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9gl2x" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.561474 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0969276-79d6-4176-9211-af61074920b1" path="/var/lib/kubelet/pods/b0969276-79d6-4176-9211-af61074920b1/volumes" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.563558 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.564362 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-jgz4l"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.612430 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw44h\" (UniqueName: \"kubernetes.io/projected/72755327-9414-46f2-b3ed-d19120b5876e-kube-api-access-kw44h\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.641516 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cf7f90d-1042-41f7-b072-57794b005f3d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.641913 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf7f90d-1042-41f7-b072-57794b005f3d-logs\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.641984 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.642007 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-446hx\" (UniqueName: \"kubernetes.io/projected/8cf7f90d-1042-41f7-b072-57794b005f3d-kube-api-access-446hx\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.642035 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.642056 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.642100 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.642117 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.706335 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-m6tvf"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.707742 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.725301 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.725523 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jk4pw" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.725645 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.726126 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-m6tvf"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.739773 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.744763 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.744858 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.745018 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cf7f90d-1042-41f7-b072-57794b005f3d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.745141 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf7f90d-1042-41f7-b072-57794b005f3d-logs\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.745220 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.745274 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-446hx\" (UniqueName: \"kubernetes.io/projected/8cf7f90d-1042-41f7-b072-57794b005f3d-kube-api-access-446hx\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.745303 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.745365 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.745431 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.746273 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cf7f90d-1042-41f7-b072-57794b005f3d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.746486 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf7f90d-1042-41f7-b072-57794b005f3d-logs\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.757189 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-config-data\") pod \"ceilometer-0\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.758990 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-khz2q"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.760724 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.764175 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.764514 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.764818 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.768295 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.771588 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-446hx\" (UniqueName: \"kubernetes.io/projected/8cf7f90d-1042-41f7-b072-57794b005f3d-kube-api-access-446hx\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.814744 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-khz2q"] Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.814981 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.843182 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.851458 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.851974 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-logs\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.852301 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.852493 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.852681 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64hfq\" (UniqueName: \"kubernetes.io/projected/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-kube-api-access-64hfq\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.852875 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-combined-ca-bundle\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.853121 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-config-data\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.853283 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" 
(UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.853413 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-config\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.853578 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-scripts\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.853864 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlvmp\" (UniqueName: \"kubernetes.io/projected/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-kube-api-access-jlvmp\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.869370 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7p8cq"] Dec 05 19:23:56 crc kubenswrapper[4828]: W1205 19:23:56.885435 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4e0e250_719c_4202_b157_e8af7f3a4441.slice/crio-bb954492090a8063bd6887f6195852de71c98526db509f329167f20cdbb0dccf WatchSource:0}: Error finding container bb954492090a8063bd6887f6195852de71c98526db509f329167f20cdbb0dccf: Status 404 returned error can't find the container with id bb954492090a8063bd6887f6195852de71c98526db509f329167f20cdbb0dccf Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.911064 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.960790 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-logs\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.960877 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.960913 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.960939 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64hfq\" (UniqueName: \"kubernetes.io/projected/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-kube-api-access-64hfq\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.960954 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-combined-ca-bundle\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.960989 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-config-data\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.961005 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.961026 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-config\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.961046 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-scripts\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.961077 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlvmp\" (UniqueName: \"kubernetes.io/projected/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-kube-api-access-jlvmp\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.961100 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.962260 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.962406 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-config\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.962815 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.963053 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.963164 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-logs\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.964072 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.980525 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-scripts\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.981284 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-config-data\") pod 
\"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.987626 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlvmp\" (UniqueName: \"kubernetes.io/projected/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-kube-api-access-jlvmp\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.993767 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64hfq\" (UniqueName: \"kubernetes.io/projected/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-kube-api-access-64hfq\") pod \"dnsmasq-dns-785d8bcb8c-khz2q\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:56 crc kubenswrapper[4828]: I1205 19:23:56.998965 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-combined-ca-bundle\") pod \"placement-db-sync-m6tvf\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") " pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.034378 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.087488 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m6tvf" Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.108756 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.111548 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-jgz4l"] Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.295604 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86bd78dcb9-8zs7w"] Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.318637 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77b4ccd85-stwwx"] Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.321015 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" event={"ID":"17b32c97-cacb-4398-a4f9-7149fcfa178d","Type":"ContainerStarted","Data":"1bab10976a2a9db399c74c438c5fedf6d682542f9a26c389882a7bbb2e470edc"} Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.341767 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7p8cq" event={"ID":"a4e0e250-719c-4202-b157-e8af7f3a4441","Type":"ContainerStarted","Data":"bb954492090a8063bd6887f6195852de71c98526db509f329167f20cdbb0dccf"} Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.430939 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7gmm7"] Dec 05 19:23:57 crc kubenswrapper[4828]: W1205 19:23:57.433861 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffc75dac_d7b0_41ce_ac4d_94f251036f95.slice/crio-013eed33e30af68389bf2830a3eaece76c89104effb51774b51df0a20ede8300 WatchSource:0}: Error finding container 013eed33e30af68389bf2830a3eaece76c89104effb51774b51df0a20ede8300: Status 404 returned error can't find the container 
with id 013eed33e30af68389bf2830a3eaece76c89104effb51774b51df0a20ede8300 Dec 05 19:23:57 crc kubenswrapper[4828]: W1205 19:23:57.442455 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod614bd6cb_e60d_4c28_9e7e_132ab3040deb.slice/crio-018a86f13fe71d060ea547b01ac6ebaf64427fe5d419868042fabe717c37012c WatchSource:0}: Error finding container 018a86f13fe71d060ea547b01ac6ebaf64427fe5d419868042fabe717c37012c: Status 404 returned error can't find the container with id 018a86f13fe71d060ea547b01ac6ebaf64427fe5d419868042fabe717c37012c Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.442978 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mdbj4"] Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.592994 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9gl2x"] Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.727497 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.781165 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-m6tvf"] Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.793103 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-khz2q"] Dec 05 19:23:57 crc kubenswrapper[4828]: W1205 19:23:57.794046 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dd4db34_a61b_4b8b_bc81_3458dfd1491b.slice/crio-9b34d2cf392c5c4de512dbfab7925c4c7de5ac6f08b792c174dee887ce5dc8b3 WatchSource:0}: Error finding container 9b34d2cf392c5c4de512dbfab7925c4c7de5ac6f08b792c174dee887ce5dc8b3: Status 404 returned error can't find the container with id 9b34d2cf392c5c4de512dbfab7925c4c7de5ac6f08b792c174dee887ce5dc8b3 Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.800328 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:23:57 crc kubenswrapper[4828]: W1205 19:23:57.823547 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcb21ca0_c48a_4c70_bb4f_fe1f240b3101.slice/crio-614a7b4e50b01e5c8c8fe10267ee131cb6c2de6b63490a6113716f9598327834 WatchSource:0}: Error finding container 614a7b4e50b01e5c8c8fe10267ee131cb6c2de6b63490a6113716f9598327834: Status 404 returned error can't find the container with id 614a7b4e50b01e5c8c8fe10267ee131cb6c2de6b63490a6113716f9598327834 Dec 05 19:23:57 crc kubenswrapper[4828]: I1205 19:23:57.929349 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.365925 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86bd78dcb9-8zs7w" event={"ID":"097fc905-c9d4-49b9-a454-2aaa1b8ad22f","Type":"ContainerStarted","Data":"be9edb235e8903407b929df285546f44ee77bbfce17a9f63777d69f627aa1249"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.376005 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mdbj4" event={"ID":"614bd6cb-e60d-4c28-9e7e-132ab3040deb","Type":"ContainerStarted","Data":"3f2c6eed6861458f9e6b32c41a9d0da2164f1f361c21514c5879a2593e92fe76"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.376061 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-sync-mdbj4" event={"ID":"614bd6cb-e60d-4c28-9e7e-132ab3040deb","Type":"ContainerStarted","Data":"018a86f13fe71d060ea547b01ac6ebaf64427fe5d419868042fabe717c37012c"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.382644 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77b4ccd85-stwwx" event={"ID":"b940b754-ad6e-454e-ab2a-242b1b63b344","Type":"ContainerStarted","Data":"7bddc2b78395c6b31d8940454d8c4a3248c9e9823589266f83ccc629fb2e6412"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.385075 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9gl2x" event={"ID":"8d22bcf5-bf39-4595-8742-5d8c3018e7bf","Type":"ContainerStarted","Data":"7e58f6cef8690c5dcd8fac03cdb31dfe8917f7ded35164b6a6fd2373818165d2"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.390681 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72755327-9414-46f2-b3ed-d19120b5876e","Type":"ContainerStarted","Data":"7d2b440a6f49f3b62a263944e77046573dd0f17b4046655c22777c4fa5b43bc0"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.393712 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7gmm7" event={"ID":"ffc75dac-d7b0-41ce-ac4d-94f251036f95","Type":"ContainerStarted","Data":"013eed33e30af68389bf2830a3eaece76c89104effb51774b51df0a20ede8300"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.405356 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mdbj4" podStartSLOduration=3.4053249 podStartE2EDuration="3.4053249s" podCreationTimestamp="2025-12-05 19:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:23:58.400043585 +0000 UTC m=+1216.295265901" watchObservedRunningTime="2025-12-05 19:23:58.4053249 +0000 UTC m=+1216.300547206" Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.411799 4828 generic.go:334] "Generic (PLEG): container finished" podID="17b32c97-cacb-4398-a4f9-7149fcfa178d" containerID="35cf44362a27fad6a19ce57122db266c1fe2e91bd6edcdbee372b6db6ec07ec1" exitCode=0 Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.411962 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" event={"ID":"17b32c97-cacb-4398-a4f9-7149fcfa178d","Type":"ContainerDied","Data":"35cf44362a27fad6a19ce57122db266c1fe2e91bd6edcdbee372b6db6ec07ec1"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.415766 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8cf7f90d-1042-41f7-b072-57794b005f3d","Type":"ContainerStarted","Data":"d0b8909e1e778c84688a78483d02a02ad3d696282effecbaddd2adf210b33e68"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.420098 4828 generic.go:334] "Generic (PLEG): container finished" podID="1dd4db34-a61b-4b8b-bc81-3458dfd1491b" containerID="d33743109ee3f46be42225d0b4190a263fee6dc17dc4aa055bc08ba905559bbb" exitCode=0 Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.420173 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" event={"ID":"1dd4db34-a61b-4b8b-bc81-3458dfd1491b","Type":"ContainerDied","Data":"d33743109ee3f46be42225d0b4190a263fee6dc17dc4aa055bc08ba905559bbb"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.420200 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" event={"ID":"1dd4db34-a61b-4b8b-bc81-3458dfd1491b","Type":"ContainerStarted","Data":"9b34d2cf392c5c4de512dbfab7925c4c7de5ac6f08b792c174dee887ce5dc8b3"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.422995 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m6tvf" event={"ID":"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101","Type":"ContainerStarted","Data":"614a7b4e50b01e5c8c8fe10267ee131cb6c2de6b63490a6113716f9598327834"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.484814 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a","Type":"ContainerStarted","Data":"e41d75ee5409c20bd94f68c230c9c94b9492cea59fc5c4b19bbd82b0701fba2e"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.492910 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7p8cq" event={"ID":"a4e0e250-719c-4202-b157-e8af7f3a4441","Type":"ContainerStarted","Data":"0262f380f91c94c1037da4a9a9d49de5b716f87161a7cf28c03238b52e61335a"} Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.545792 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7p8cq" podStartSLOduration=3.545775459 podStartE2EDuration="3.545775459s" podCreationTimestamp="2025-12-05 19:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:23:58.543311341 +0000 UTC m=+1216.438533647" watchObservedRunningTime="2025-12-05 19:23:58.545775459 +0000 UTC m=+1216.440997765" Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.742240 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.799639 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77b4ccd85-stwwx"] Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.846116 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b878f489-x2jpv"] Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.847511 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.865832 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.886277 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b878f489-x2jpv"] Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.916036 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ada41f83-4947-4d14-a1c1-c1dd44f7d656-config-data\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.916106 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ada41f83-4947-4d14-a1c1-c1dd44f7d656-horizon-secret-key\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.916180 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ada41f83-4947-4d14-a1c1-c1dd44f7d656-logs\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.916315 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd6dq\" (UniqueName: \"kubernetes.io/projected/ada41f83-4947-4d14-a1c1-c1dd44f7d656-kube-api-access-qd6dq\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.916371 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ada41f83-4947-4d14-a1c1-c1dd44f7d656-scripts\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:58 crc kubenswrapper[4828]: I1205 19:23:58.931769 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.022326 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd6dq\" (UniqueName: \"kubernetes.io/projected/ada41f83-4947-4d14-a1c1-c1dd44f7d656-kube-api-access-qd6dq\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.022700 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ada41f83-4947-4d14-a1c1-c1dd44f7d656-scripts\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.022765 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ada41f83-4947-4d14-a1c1-c1dd44f7d656-config-data\") pod \"horizon-b878f489-x2jpv\" (UID: 
\"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.022791 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ada41f83-4947-4d14-a1c1-c1dd44f7d656-horizon-secret-key\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.022848 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ada41f83-4947-4d14-a1c1-c1dd44f7d656-logs\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.023359 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ada41f83-4947-4d14-a1c1-c1dd44f7d656-logs\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.023573 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ada41f83-4947-4d14-a1c1-c1dd44f7d656-scripts\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.025145 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ada41f83-4947-4d14-a1c1-c1dd44f7d656-config-data\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.039994 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ada41f83-4947-4d14-a1c1-c1dd44f7d656-horizon-secret-key\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.040213 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd6dq\" (UniqueName: \"kubernetes.io/projected/ada41f83-4947-4d14-a1c1-c1dd44f7d656-kube-api-access-qd6dq\") pod \"horizon-b878f489-x2jpv\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.153448 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.196259 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.227488 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-dns-swift-storage-0\") pod \"17b32c97-cacb-4398-a4f9-7149fcfa178d\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.227548 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-ovsdbserver-nb\") pod \"17b32c97-cacb-4398-a4f9-7149fcfa178d\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.227600 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-config\") pod \"17b32c97-cacb-4398-a4f9-7149fcfa178d\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.227634 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-257b9\" (UniqueName: \"kubernetes.io/projected/17b32c97-cacb-4398-a4f9-7149fcfa178d-kube-api-access-257b9\") pod \"17b32c97-cacb-4398-a4f9-7149fcfa178d\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.227663 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-ovsdbserver-sb\") pod \"17b32c97-cacb-4398-a4f9-7149fcfa178d\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.227695 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-dns-svc\") pod \"17b32c97-cacb-4398-a4f9-7149fcfa178d\" (UID: \"17b32c97-cacb-4398-a4f9-7149fcfa178d\") " Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.238011 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b32c97-cacb-4398-a4f9-7149fcfa178d-kube-api-access-257b9" (OuterVolumeSpecName: "kube-api-access-257b9") pod "17b32c97-cacb-4398-a4f9-7149fcfa178d" (UID: "17b32c97-cacb-4398-a4f9-7149fcfa178d"). InnerVolumeSpecName "kube-api-access-257b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.260436 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-config" (OuterVolumeSpecName: "config") pod "17b32c97-cacb-4398-a4f9-7149fcfa178d" (UID: "17b32c97-cacb-4398-a4f9-7149fcfa178d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.263446 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "17b32c97-cacb-4398-a4f9-7149fcfa178d" (UID: "17b32c97-cacb-4398-a4f9-7149fcfa178d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.266072 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "17b32c97-cacb-4398-a4f9-7149fcfa178d" (UID: "17b32c97-cacb-4398-a4f9-7149fcfa178d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.291303 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17b32c97-cacb-4398-a4f9-7149fcfa178d" (UID: "17b32c97-cacb-4398-a4f9-7149fcfa178d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.297921 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "17b32c97-cacb-4398-a4f9-7149fcfa178d" (UID: "17b32c97-cacb-4398-a4f9-7149fcfa178d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.329991 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.330026 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.330039 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.330052 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-257b9\" (UniqueName: \"kubernetes.io/projected/17b32c97-cacb-4398-a4f9-7149fcfa178d-kube-api-access-257b9\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.330277 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.330292 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b32c97-cacb-4398-a4f9-7149fcfa178d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.511168 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a","Type":"ContainerStarted","Data":"538977d0b7d9845f46256cecc5720933660f4b760c76688a4b1246a27ff6ba0d"} Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.514965 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" 
event={"ID":"17b32c97-cacb-4398-a4f9-7149fcfa178d","Type":"ContainerDied","Data":"1bab10976a2a9db399c74c438c5fedf6d682542f9a26c389882a7bbb2e470edc"} Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.515018 4828 scope.go:117] "RemoveContainer" containerID="35cf44362a27fad6a19ce57122db266c1fe2e91bd6edcdbee372b6db6ec07ec1" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.515159 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-jgz4l" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.526309 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8cf7f90d-1042-41f7-b072-57794b005f3d","Type":"ContainerStarted","Data":"b17708d34872bb0deb8095e27e925a5e14b1b238a131014e69b851fcd20bc402"} Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.541845 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" event={"ID":"1dd4db34-a61b-4b8b-bc81-3458dfd1491b","Type":"ContainerStarted","Data":"948bf1168f1fe5e8ef0204bb17b2697b5a80db6cb78befbc95abf38a7ba890c7"} Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.556146 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.661121 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-jgz4l"] Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.673793 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-jgz4l"] Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.678995 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" podStartSLOduration=3.678975791 podStartE2EDuration="3.678975791s" podCreationTimestamp="2025-12-05 19:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:23:59.613437616 +0000 UTC m=+1217.508659922" watchObservedRunningTime="2025-12-05 19:23:59.678975791 +0000 UTC m=+1217.574198097" Dec 05 19:23:59 crc kubenswrapper[4828]: I1205 19:23:59.712490 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b878f489-x2jpv"] Dec 05 19:24:00 crc kubenswrapper[4828]: I1205 19:24:00.494168 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17b32c97-cacb-4398-a4f9-7149fcfa178d" path="/var/lib/kubelet/pods/17b32c97-cacb-4398-a4f9-7149fcfa178d/volumes" Dec 05 19:24:00 crc kubenswrapper[4828]: I1205 19:24:00.571325 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b878f489-x2jpv" event={"ID":"ada41f83-4947-4d14-a1c1-c1dd44f7d656","Type":"ContainerStarted","Data":"a1622b9715999824e58e4dd7f6b8c987ec97012e8cf5df82182203cf70f8cbe6"} Dec 05 19:24:01 crc kubenswrapper[4828]: I1205 19:24:01.583980 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a","Type":"ContainerStarted","Data":"b0b30dcb799d1e28511705e86707e49cb8e488eafbc49c607ffeb844d329a9e3"} Dec 05 19:24:02 crc kubenswrapper[4828]: I1205 19:24:02.599993 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"8cf7f90d-1042-41f7-b072-57794b005f3d","Type":"ContainerStarted","Data":"2b7e37af53ad45e1d5d49b4c13489828e428a80aa190ebc46de259368bbf9e01"} Dec 05 19:24:02 crc kubenswrapper[4828]: I1205 19:24:02.600147 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" containerName="glance-log" containerID="cri-o://538977d0b7d9845f46256cecc5720933660f4b760c76688a4b1246a27ff6ba0d" gracePeriod=30 Dec 05 19:24:02 crc kubenswrapper[4828]: I1205 19:24:02.600219 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" containerName="glance-httpd" containerID="cri-o://b0b30dcb799d1e28511705e86707e49cb8e488eafbc49c607ffeb844d329a9e3" gracePeriod=30 Dec 05 19:24:02 crc kubenswrapper[4828]: I1205 19:24:02.716231 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.716204189 podStartE2EDuration="7.716204189s" podCreationTimestamp="2025-12-05 19:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:02.70750325 +0000 UTC m=+1220.602725586" watchObservedRunningTime="2025-12-05 19:24:02.716204189 +0000 UTC m=+1220.611426495" Dec 05 19:24:03 crc kubenswrapper[4828]: I1205 19:24:03.610793 4828 generic.go:334] "Generic (PLEG): container finished" podID="69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" containerID="b0b30dcb799d1e28511705e86707e49cb8e488eafbc49c607ffeb844d329a9e3" exitCode=0 Dec 05 19:24:03 crc kubenswrapper[4828]: I1205 19:24:03.611123 4828 generic.go:334] "Generic (PLEG): container finished" podID="69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" containerID="538977d0b7d9845f46256cecc5720933660f4b760c76688a4b1246a27ff6ba0d" exitCode=143 Dec 05 19:24:03 crc kubenswrapper[4828]: I1205 19:24:03.611267 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8cf7f90d-1042-41f7-b072-57794b005f3d" containerName="glance-log" containerID="cri-o://b17708d34872bb0deb8095e27e925a5e14b1b238a131014e69b851fcd20bc402" gracePeriod=30 Dec 05 19:24:03 crc kubenswrapper[4828]: I1205 19:24:03.611609 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a","Type":"ContainerDied","Data":"b0b30dcb799d1e28511705e86707e49cb8e488eafbc49c607ffeb844d329a9e3"} Dec 05 19:24:03 crc kubenswrapper[4828]: I1205 19:24:03.611640 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a","Type":"ContainerDied","Data":"538977d0b7d9845f46256cecc5720933660f4b760c76688a4b1246a27ff6ba0d"} Dec 05 19:24:03 crc kubenswrapper[4828]: I1205 19:24:03.611981 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8cf7f90d-1042-41f7-b072-57794b005f3d" containerName="glance-httpd" containerID="cri-o://2b7e37af53ad45e1d5d49b4c13489828e428a80aa190ebc46de259368bbf9e01" gracePeriod=30 Dec 05 19:24:03 crc kubenswrapper[4828]: I1205 19:24:03.634233 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.634210464 podStartE2EDuration="7.634210464s" 
podCreationTimestamp="2025-12-05 19:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:03.630473932 +0000 UTC m=+1221.525696258" watchObservedRunningTime="2025-12-05 19:24:03.634210464 +0000 UTC m=+1221.529432780" Dec 05 19:24:04 crc kubenswrapper[4828]: I1205 19:24:04.620866 4828 generic.go:334] "Generic (PLEG): container finished" podID="8cf7f90d-1042-41f7-b072-57794b005f3d" containerID="2b7e37af53ad45e1d5d49b4c13489828e428a80aa190ebc46de259368bbf9e01" exitCode=0 Dec 05 19:24:04 crc kubenswrapper[4828]: I1205 19:24:04.620935 4828 generic.go:334] "Generic (PLEG): container finished" podID="8cf7f90d-1042-41f7-b072-57794b005f3d" containerID="b17708d34872bb0deb8095e27e925a5e14b1b238a131014e69b851fcd20bc402" exitCode=143 Dec 05 19:24:04 crc kubenswrapper[4828]: I1205 19:24:04.620961 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8cf7f90d-1042-41f7-b072-57794b005f3d","Type":"ContainerDied","Data":"2b7e37af53ad45e1d5d49b4c13489828e428a80aa190ebc46de259368bbf9e01"} Dec 05 19:24:04 crc kubenswrapper[4828]: I1205 19:24:04.620990 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8cf7f90d-1042-41f7-b072-57794b005f3d","Type":"ContainerDied","Data":"b17708d34872bb0deb8095e27e925a5e14b1b238a131014e69b851fcd20bc402"} Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.637396 4828 generic.go:334] "Generic (PLEG): container finished" podID="a4e0e250-719c-4202-b157-e8af7f3a4441" containerID="0262f380f91c94c1037da4a9a9d49de5b716f87161a7cf28c03238b52e61335a" exitCode=0 Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.637641 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7p8cq" event={"ID":"a4e0e250-719c-4202-b157-e8af7f3a4441","Type":"ContainerDied","Data":"0262f380f91c94c1037da4a9a9d49de5b716f87161a7cf28c03238b52e61335a"} Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.749200 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86bd78dcb9-8zs7w"] Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.800066 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-699b69c564-442lb"] Dec 05 19:24:05 crc kubenswrapper[4828]: E1205 19:24:05.801458 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b32c97-cacb-4398-a4f9-7149fcfa178d" containerName="init" Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.801485 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b32c97-cacb-4398-a4f9-7149fcfa178d" containerName="init" Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.801815 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b32c97-cacb-4398-a4f9-7149fcfa178d" containerName="init" Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.803651 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.814754 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.823793 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-699b69c564-442lb"] Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.892201 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b878f489-x2jpv"] Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.917061 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-594b9fb44-r9zh6"] Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.919452 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.929644 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-594b9fb44-r9zh6"] Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.959533 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-logs\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.959616 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-scripts\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.959645 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-horizon-secret-key\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.959716 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjz7z\" (UniqueName: \"kubernetes.io/projected/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-kube-api-access-gjz7z\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.960018 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-horizon-tls-certs\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.960095 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-config-data\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:05 crc kubenswrapper[4828]: I1205 19:24:05.960146 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-combined-ca-bundle\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062460 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-horizon-tls-certs\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062507 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-config-data\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062532 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-combined-ca-bundle\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062551 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-logs\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062574 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99c01665-feb9-49f7-a97a-b6e6d87dc991-scripts\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062608 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99c01665-feb9-49f7-a97a-b6e6d87dc991-config-data\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062636 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-scripts\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062658 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-horizon-secret-key\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062674 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/99c01665-feb9-49f7-a97a-b6e6d87dc991-horizon-tls-certs\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062699 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/99c01665-feb9-49f7-a97a-b6e6d87dc991-horizon-secret-key\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062724 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c01665-feb9-49f7-a97a-b6e6d87dc991-combined-ca-bundle\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062740 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgn78\" (UniqueName: \"kubernetes.io/projected/99c01665-feb9-49f7-a97a-b6e6d87dc991-kube-api-access-dgn78\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062755 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99c01665-feb9-49f7-a97a-b6e6d87dc991-logs\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.062783 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjz7z\" (UniqueName: \"kubernetes.io/projected/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-kube-api-access-gjz7z\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.063415 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-logs\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.064009 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-config-data\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.064023 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-scripts\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.071784 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-horizon-secret-key\") pod 
\"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.072311 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-horizon-tls-certs\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.072876 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-combined-ca-bundle\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.086048 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjz7z\" (UniqueName: \"kubernetes.io/projected/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-kube-api-access-gjz7z\") pod \"horizon-699b69c564-442lb\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.150004 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-699b69c564-442lb" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.163975 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99c01665-feb9-49f7-a97a-b6e6d87dc991-scripts\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.164055 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99c01665-feb9-49f7-a97a-b6e6d87dc991-config-data\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.164102 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/99c01665-feb9-49f7-a97a-b6e6d87dc991-horizon-tls-certs\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.164137 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/99c01665-feb9-49f7-a97a-b6e6d87dc991-horizon-secret-key\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.164169 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c01665-feb9-49f7-a97a-b6e6d87dc991-combined-ca-bundle\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.164193 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgn78\" (UniqueName: 
\"kubernetes.io/projected/99c01665-feb9-49f7-a97a-b6e6d87dc991-kube-api-access-dgn78\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.164215 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99c01665-feb9-49f7-a97a-b6e6d87dc991-logs\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.164609 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99c01665-feb9-49f7-a97a-b6e6d87dc991-logs\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.164631 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99c01665-feb9-49f7-a97a-b6e6d87dc991-scripts\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.165902 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99c01665-feb9-49f7-a97a-b6e6d87dc991-config-data\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.169732 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/99c01665-feb9-49f7-a97a-b6e6d87dc991-horizon-tls-certs\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.170394 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c01665-feb9-49f7-a97a-b6e6d87dc991-combined-ca-bundle\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.172280 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/99c01665-feb9-49f7-a97a-b6e6d87dc991-horizon-secret-key\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.195318 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgn78\" (UniqueName: \"kubernetes.io/projected/99c01665-feb9-49f7-a97a-b6e6d87dc991-kube-api-access-dgn78\") pod \"horizon-594b9fb44-r9zh6\" (UID: \"99c01665-feb9-49f7-a97a-b6e6d87dc991\") " pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:06 crc kubenswrapper[4828]: I1205 19:24:06.243529 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:24:07 crc kubenswrapper[4828]: I1205 19:24:07.110935 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:24:07 crc kubenswrapper[4828]: I1205 19:24:07.178906 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rf9j4"] Dec 05 19:24:07 crc kubenswrapper[4828]: I1205 19:24:07.180275 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" podUID="398a387b-e59c-486c-a39c-a0e0f45c75a2" containerName="dnsmasq-dns" containerID="cri-o://37bb34b2e7a9f1094c4687bc1df6e79504acda85f6285808deea147aabb59295" gracePeriod=10 Dec 05 19:24:08 crc kubenswrapper[4828]: I1205 19:24:08.501649 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" podUID="398a387b-e59c-486c-a39c-a0e0f45c75a2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Dec 05 19:24:09 crc kubenswrapper[4828]: I1205 19:24:09.669503 4828 generic.go:334] "Generic (PLEG): container finished" podID="398a387b-e59c-486c-a39c-a0e0f45c75a2" containerID="37bb34b2e7a9f1094c4687bc1df6e79504acda85f6285808deea147aabb59295" exitCode=0 Dec 05 19:24:09 crc kubenswrapper[4828]: I1205 19:24:09.669545 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" event={"ID":"398a387b-e59c-486c-a39c-a0e0f45c75a2","Type":"ContainerDied","Data":"37bb34b2e7a9f1094c4687bc1df6e79504acda85f6285808deea147aabb59295"} Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.554901 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.645944 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-combined-ca-bundle\") pod \"a4e0e250-719c-4202-b157-e8af7f3a4441\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.646109 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-fernet-keys\") pod \"a4e0e250-719c-4202-b157-e8af7f3a4441\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.646227 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-config-data\") pod \"a4e0e250-719c-4202-b157-e8af7f3a4441\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.646261 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-credential-keys\") pod \"a4e0e250-719c-4202-b157-e8af7f3a4441\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.646356 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68hdx\" (UniqueName: \"kubernetes.io/projected/a4e0e250-719c-4202-b157-e8af7f3a4441-kube-api-access-68hdx\") pod \"a4e0e250-719c-4202-b157-e8af7f3a4441\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.646409 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-scripts\") pod \"a4e0e250-719c-4202-b157-e8af7f3a4441\" (UID: \"a4e0e250-719c-4202-b157-e8af7f3a4441\") " Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.653549 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a4e0e250-719c-4202-b157-e8af7f3a4441" (UID: "a4e0e250-719c-4202-b157-e8af7f3a4441"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.654928 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e0e250-719c-4202-b157-e8af7f3a4441-kube-api-access-68hdx" (OuterVolumeSpecName: "kube-api-access-68hdx") pod "a4e0e250-719c-4202-b157-e8af7f3a4441" (UID: "a4e0e250-719c-4202-b157-e8af7f3a4441"). InnerVolumeSpecName "kube-api-access-68hdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.674065 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-scripts" (OuterVolumeSpecName: "scripts") pod "a4e0e250-719c-4202-b157-e8af7f3a4441" (UID: "a4e0e250-719c-4202-b157-e8af7f3a4441"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.675574 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a4e0e250-719c-4202-b157-e8af7f3a4441" (UID: "a4e0e250-719c-4202-b157-e8af7f3a4441"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.684554 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7p8cq" event={"ID":"a4e0e250-719c-4202-b157-e8af7f3a4441","Type":"ContainerDied","Data":"bb954492090a8063bd6887f6195852de71c98526db509f329167f20cdbb0dccf"} Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.684594 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb954492090a8063bd6887f6195852de71c98526db509f329167f20cdbb0dccf" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.684653 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7p8cq" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.698572 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-config-data" (OuterVolumeSpecName: "config-data") pod "a4e0e250-719c-4202-b157-e8af7f3a4441" (UID: "a4e0e250-719c-4202-b157-e8af7f3a4441"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.700839 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4e0e250-719c-4202-b157-e8af7f3a4441" (UID: "a4e0e250-719c-4202-b157-e8af7f3a4441"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.749359 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.749408 4828 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.749427 4828 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.749446 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.749466 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68hdx\" (UniqueName: \"kubernetes.io/projected/a4e0e250-719c-4202-b157-e8af7f3a4441-kube-api-access-68hdx\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:10 crc kubenswrapper[4828]: I1205 19:24:10.749486 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e0e250-719c-4202-b157-e8af7f3a4441-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.646653 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7p8cq"] Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.654234 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7p8cq"] Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.739068 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-v6w4g"] Dec 05 19:24:11 crc kubenswrapper[4828]: E1205 19:24:11.739424 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e0e250-719c-4202-b157-e8af7f3a4441" containerName="keystone-bootstrap" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.739437 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e0e250-719c-4202-b157-e8af7f3a4441" containerName="keystone-bootstrap" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.739606 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e0e250-719c-4202-b157-e8af7f3a4441" containerName="keystone-bootstrap" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.740400 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.742361 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.744714 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.745080 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7rdh4" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.745311 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.745791 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.753761 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v6w4g"] Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.885309 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-config-data\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.885388 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-combined-ca-bundle\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.885452 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-fernet-keys\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.885480 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-credential-keys\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.885509 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pjmw\" (UniqueName: \"kubernetes.io/projected/08bf11d8-3591-4229-be9a-ce8b8e709739-kube-api-access-5pjmw\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.885526 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-scripts\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.987208 4828 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-fernet-keys\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.987269 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-credential-keys\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.987306 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pjmw\" (UniqueName: \"kubernetes.io/projected/08bf11d8-3591-4229-be9a-ce8b8e709739-kube-api-access-5pjmw\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.987330 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-scripts\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.987396 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-config-data\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.987439 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-combined-ca-bundle\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.993191 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-scripts\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.996380 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-fernet-keys\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.997236 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-credential-keys\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.997424 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-config-data\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " 
pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:11 crc kubenswrapper[4828]: I1205 19:24:11.999432 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-combined-ca-bundle\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:12 crc kubenswrapper[4828]: I1205 19:24:12.011013 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pjmw\" (UniqueName: \"kubernetes.io/projected/08bf11d8-3591-4229-be9a-ce8b8e709739-kube-api-access-5pjmw\") pod \"keystone-bootstrap-v6w4g\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") " pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:12 crc kubenswrapper[4828]: I1205 19:24:12.096996 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:12 crc kubenswrapper[4828]: I1205 19:24:12.484932 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e0e250-719c-4202-b157-e8af7f3a4441" path="/var/lib/kubelet/pods/a4e0e250-719c-4202-b157-e8af7f3a4441/volumes" Dec 05 19:24:13 crc kubenswrapper[4828]: I1205 19:24:13.500750 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" podUID="398a387b-e59c-486c-a39c-a0e0f45c75a2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Dec 05 19:24:14 crc kubenswrapper[4828]: E1205 19:24:14.573773 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 05 19:24:14 crc kubenswrapper[4828]: E1205 19:24:14.574006 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h58bh668h94h578h7ch5ffh545h599h5f5h98hf4h68ch65bhb8h657h654h647h67bh6bhb4h586hbch9fhch5f5hb4hffhffh5f8h557q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mzq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-86bd78dcb9-8zs7w_openstack(097fc905-c9d4-49b9-a454-2aaa1b8ad22f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:24:14 crc kubenswrapper[4828]: E1205 19:24:14.576964 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-86bd78dcb9-8zs7w" podUID="097fc905-c9d4-49b9-a454-2aaa1b8ad22f"
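
The pod_workers error shows the two image-related failure modes side by side for horizon-86bd78dcb9-8zs7w: horizon-log fails with ErrImagePull (this pull was cancelled mid-copy by CRI-O), while horizon is already in ImagePullBackOff (kubelet declining to retry immediately after earlier failures). Retries back off roughly exponentially toward a cap; the sketch below has that shape, with deliberately tiny durations so it runs quickly (the constants are illustrative, not kubelet's source, which tracks backoff per image):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pullWithBackoff sketches the ErrImagePull -> ImagePullBackOff cycle:
    // each failed pull roughly doubles the wait before the next attempt,
    // up to a cap. Durations are scaled down so the demo finishes fast.
    func pullWithBackoff(pull func() error, attempts int) error {
        wait := 10 * time.Millisecond
        const maxWait = 5 * time.Second
        var err error
        for i := 1; i <= attempts; i++ {
            if err = pull(); err == nil {
                return nil
            }
            fmt.Printf("attempt %d: ErrImagePull: %v; next try in %s (ImagePullBackOff)\n", i, err, wait)
            time.Sleep(wait)
            if wait *= 2; wait > maxWait {
                wait = maxWait
            }
        }
        return err
    }

    func main() {
        // The error string is the one CRI-O returned in the log above.
        pullErr := errors.New("rpc error: code = Canceled desc = copying config: context canceled")
        if err := pullWithBackoff(func() error { return pullErr }, 3); err != nil {
            fmt.Println("giving up:", err)
        }
    }

Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.468994 4828 util.go:48] "No ready sandbox for pod can be found.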
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.571836 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.571897 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-httpd-run\") pod \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.571984 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-logs\") pod \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.572163 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-scripts\") pod \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.572280 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-logs" (OuterVolumeSpecName: "logs") pod "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" (UID: "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.572324 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" (UID: "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.572685 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzgjt\" (UniqueName: \"kubernetes.io/projected/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-kube-api-access-kzgjt\") pod \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.572755 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-config-data\") pod \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.572789 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-public-tls-certs\") pod \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.572814 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-combined-ca-bundle\") pod \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\" (UID: \"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a\") " Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.574217 4828 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.574238 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.578071 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" (UID: "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.578155 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-scripts" (OuterVolumeSpecName: "scripts") pod "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" (UID: "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.593141 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-kube-api-access-kzgjt" (OuterVolumeSpecName: "kube-api-access-kzgjt") pod "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" (UID: "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a"). InnerVolumeSpecName "kube-api-access-kzgjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.610254 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" (UID: "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.619481 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" (UID: "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.624223 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-config-data" (OuterVolumeSpecName: "config-data") pod "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" (UID: "69ffb2d8-a916-4ab5-bf26-1404e5e0c98a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.676161 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.676202 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzgjt\" (UniqueName: \"kubernetes.io/projected/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-kube-api-access-kzgjt\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.676218 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.676231 4828 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.676244 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.676285 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.714092 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.739509 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69ffb2d8-a916-4ab5-bf26-1404e5e0c98a","Type":"ContainerDied","Data":"e41d75ee5409c20bd94f68c230c9c94b9492cea59fc5c4b19bbd82b0701fba2e"} Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 
19:24:16.739563 4828 scope.go:117] "RemoveContainer" containerID="b0b30dcb799d1e28511705e86707e49cb8e488eafbc49c607ffeb844d329a9e3" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.739756 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.777405 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.778571 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.784347 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.813687 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:24:17 crc kubenswrapper[4828]: E1205 19:24:16.814188 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" containerName="glance-log" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.814204 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" containerName="glance-log" Dec 05 19:24:17 crc kubenswrapper[4828]: E1205 19:24:16.814228 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" containerName="glance-httpd" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.814236 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" containerName="glance-httpd" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.814445 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" containerName="glance-httpd" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.814471 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" containerName="glance-log" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.815594 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.821388 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.837127 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.838604 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.981250 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-scripts\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.981352 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-config-data\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.981382 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.981406 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59e69475-93aa-4875-8997-7cfa85de4b75-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.981441 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.981472 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs2cw\" (UniqueName: \"kubernetes.io/projected/59e69475-93aa-4875-8997-7cfa85de4b75-kube-api-access-cs2cw\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.981489 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e69475-93aa-4875-8997-7cfa85de4b75-logs\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:16.981508 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.082684 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e69475-93aa-4875-8997-7cfa85de4b75-logs\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.082740 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.082821 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-scripts\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.082958 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-config-data\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.082993 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.083027 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59e69475-93aa-4875-8997-7cfa85de4b75-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.083067 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.083110 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs2cw\" (UniqueName: \"kubernetes.io/projected/59e69475-93aa-4875-8997-7cfa85de4b75-kube-api-access-cs2cw\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.083550 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/59e69475-93aa-4875-8997-7cfa85de4b75-logs\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.085387 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59e69475-93aa-4875-8997-7cfa85de4b75-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.085526 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.087876 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-scripts\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.091350 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.091477 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.092419 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-config-data\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.109401 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs2cw\" (UniqueName: \"kubernetes.io/projected/59e69475-93aa-4875-8997-7cfa85de4b75-kube-api-access-cs2cw\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.123513 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " pod="openstack/glance-default-external-api-0" Dec 05 19:24:17 crc kubenswrapper[4828]: I1205 19:24:17.168098 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 19:24:18 crc kubenswrapper[4828]: I1205 19:24:18.456544 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69ffb2d8-a916-4ab5-bf26-1404e5e0c98a" path="/var/lib/kubelet/pods/69ffb2d8-a916-4ab5-bf26-1404e5e0c98a/volumes" Dec 05 19:24:23 crc kubenswrapper[4828]: I1205 19:24:23.501736 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" podUID="398a387b-e59c-486c-a39c-a0e0f45c75a2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Dec 05 19:24:23 crc kubenswrapper[4828]: I1205 19:24:23.502904 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.045020 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.106917 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-scripts\") pod \"8cf7f90d-1042-41f7-b072-57794b005f3d\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.107002 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf7f90d-1042-41f7-b072-57794b005f3d-logs\") pod \"8cf7f90d-1042-41f7-b072-57794b005f3d\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.107068 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-combined-ca-bundle\") pod \"8cf7f90d-1042-41f7-b072-57794b005f3d\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.107170 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-config-data\") pod \"8cf7f90d-1042-41f7-b072-57794b005f3d\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.107220 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cf7f90d-1042-41f7-b072-57794b005f3d-httpd-run\") pod \"8cf7f90d-1042-41f7-b072-57794b005f3d\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.107266 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-internal-tls-certs\") pod \"8cf7f90d-1042-41f7-b072-57794b005f3d\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.107352 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"8cf7f90d-1042-41f7-b072-57794b005f3d\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.107525 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-446hx\" (UniqueName: \"kubernetes.io/projected/8cf7f90d-1042-41f7-b072-57794b005f3d-kube-api-access-446hx\") pod \"8cf7f90d-1042-41f7-b072-57794b005f3d\" (UID: \"8cf7f90d-1042-41f7-b072-57794b005f3d\") " Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.108976 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cf7f90d-1042-41f7-b072-57794b005f3d-logs" (OuterVolumeSpecName: "logs") pod "8cf7f90d-1042-41f7-b072-57794b005f3d" (UID: "8cf7f90d-1042-41f7-b072-57794b005f3d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.109324 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cf7f90d-1042-41f7-b072-57794b005f3d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8cf7f90d-1042-41f7-b072-57794b005f3d" (UID: "8cf7f90d-1042-41f7-b072-57794b005f3d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.113903 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-scripts" (OuterVolumeSpecName: "scripts") pod "8cf7f90d-1042-41f7-b072-57794b005f3d" (UID: "8cf7f90d-1042-41f7-b072-57794b005f3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.114045 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "8cf7f90d-1042-41f7-b072-57794b005f3d" (UID: "8cf7f90d-1042-41f7-b072-57794b005f3d"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.123143 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf7f90d-1042-41f7-b072-57794b005f3d-kube-api-access-446hx" (OuterVolumeSpecName: "kube-api-access-446hx") pod "8cf7f90d-1042-41f7-b072-57794b005f3d" (UID: "8cf7f90d-1042-41f7-b072-57794b005f3d"). InnerVolumeSpecName "kube-api-access-446hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.141889 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cf7f90d-1042-41f7-b072-57794b005f3d" (UID: "8cf7f90d-1042-41f7-b072-57794b005f3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.156715 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-config-data" (OuterVolumeSpecName: "config-data") pod "8cf7f90d-1042-41f7-b072-57794b005f3d" (UID: "8cf7f90d-1042-41f7-b072-57794b005f3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.161022 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8cf7f90d-1042-41f7-b072-57794b005f3d" (UID: "8cf7f90d-1042-41f7-b072-57794b005f3d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.211465 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.211495 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf7f90d-1042-41f7-b072-57794b005f3d-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.211510 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.211524 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.211536 4828 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cf7f90d-1042-41f7-b072-57794b005f3d-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.211547 4828 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf7f90d-1042-41f7-b072-57794b005f3d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.211593 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.219425 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-446hx\" (UniqueName: \"kubernetes.io/projected/8cf7f90d-1042-41f7-b072-57794b005f3d-kube-api-access-446hx\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.238629 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.321903 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:24 crc kubenswrapper[4828]: E1205 19:24:24.375508 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 05 19:24:24 crc kubenswrapper[4828]: E1205 19:24:24.375898 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd6h58ch689h5b7h574h599hdch666h94h574h64bh569h659h5hb7h674hbch55hc9h68dh597h64fh9hfbh595h65h6h576h6h597h54bh677q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kw44h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(72755327-9414-46f2-b3ed-d19120b5876e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.405638 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86bd78dcb9-8zs7w"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.524450 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mzq5\" (UniqueName: \"kubernetes.io/projected/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-kube-api-access-6mzq5\") pod \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") "
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.524510 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-horizon-secret-key\") pod \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") "
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.524553 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-logs\") pod \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") "
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.524594 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-config-data\") pod \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") "
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.524628 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-scripts\") pod \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\" (UID: \"097fc905-c9d4-49b9-a454-2aaa1b8ad22f\") "
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.525141 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-logs" (OuterVolumeSpecName: "logs") pod "097fc905-c9d4-49b9-a454-2aaa1b8ad22f" (UID: "097fc905-c9d4-49b9-a454-2aaa1b8ad22f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.525270 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-scripts" (OuterVolumeSpecName: "scripts") pod "097fc905-c9d4-49b9-a454-2aaa1b8ad22f" (UID: "097fc905-c9d4-49b9-a454-2aaa1b8ad22f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.525790 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-config-data" (OuterVolumeSpecName: "config-data") pod "097fc905-c9d4-49b9-a454-2aaa1b8ad22f" (UID: "097fc905-c9d4-49b9-a454-2aaa1b8ad22f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.527719 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "097fc905-c9d4-49b9-a454-2aaa1b8ad22f" (UID: "097fc905-c9d4-49b9-a454-2aaa1b8ad22f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.528482 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-kube-api-access-6mzq5" (OuterVolumeSpecName: "kube-api-access-6mzq5") pod "097fc905-c9d4-49b9-a454-2aaa1b8ad22f" (UID: "097fc905-c9d4-49b9-a454-2aaa1b8ad22f"). InnerVolumeSpecName "kube-api-access-6mzq5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.626522 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.626554 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mzq5\" (UniqueName: \"kubernetes.io/projected/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-kube-api-access-6mzq5\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.626565 4828 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.626573 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-logs\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.626582 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/097fc905-c9d4-49b9-a454-2aaa1b8ad22f-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.804418 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86bd78dcb9-8zs7w"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.804413 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86bd78dcb9-8zs7w" event={"ID":"097fc905-c9d4-49b9-a454-2aaa1b8ad22f","Type":"ContainerDied","Data":"be9edb235e8903407b929df285546f44ee77bbfce17a9f63777d69f627aa1249"}
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.806102 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8cf7f90d-1042-41f7-b072-57794b005f3d","Type":"ContainerDied","Data":"d0b8909e1e778c84688a78483d02a02ad3d696282effecbaddd2adf210b33e68"}
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.806185 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.827870 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.844707 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.865317 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 05 19:24:24 crc kubenswrapper[4828]: E1205 19:24:24.865757 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf7f90d-1042-41f7-b072-57794b005f3d" containerName="glance-httpd"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.865773 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf7f90d-1042-41f7-b072-57794b005f3d" containerName="glance-httpd"
Dec 05 19:24:24 crc kubenswrapper[4828]: E1205 19:24:24.865801 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf7f90d-1042-41f7-b072-57794b005f3d" containerName="glance-log"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.865809 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf7f90d-1042-41f7-b072-57794b005f3d" containerName="glance-log"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.866023 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf7f90d-1042-41f7-b072-57794b005f3d" containerName="glance-log"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.866049 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf7f90d-1042-41f7-b072-57794b005f3d" containerName="glance-httpd"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.867115 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.872236 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.872492 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 05 19:24:24 crc kubenswrapper[4828]: E1205 19:24:24.898058 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Dec 05 19:24:24 crc kubenswrapper[4828]: E1205 19:24:24.898281 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ht28x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-9gl2x_openstack(8d22bcf5-bf39-4595-8742-5d8c3018e7bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 19:24:24 crc kubenswrapper[4828]: E1205 19:24:24.899451 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-9gl2x" podUID="8d22bcf5-bf39-4595-8742-5d8c3018e7bf"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.916977 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.940117 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.940177 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctxwp\" (UniqueName: \"kubernetes.io/projected/b578dfb2-8a7f-420d-a503-d2eac607b648-kube-api-access-ctxwp\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.940309 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b578dfb2-8a7f-420d-a503-d2eac607b648-logs\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.940350 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b578dfb2-8a7f-420d-a503-d2eac607b648-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.940414 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.940446 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.940465 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.940481 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.940504 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.952154 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86bd78dcb9-8zs7w"]
Dec 05 19:24:24 crc kubenswrapper[4828]: I1205 19:24:24.964994 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-86bd78dcb9-8zs7w"]
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.041642 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-ovsdbserver-nb\") pod \"398a387b-e59c-486c-a39c-a0e0f45c75a2\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") "
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.041748 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2hth\" (UniqueName: \"kubernetes.io/projected/398a387b-e59c-486c-a39c-a0e0f45c75a2-kube-api-access-s2hth\") pod \"398a387b-e59c-486c-a39c-a0e0f45c75a2\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") "
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.041788 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-dns-svc\") pod \"398a387b-e59c-486c-a39c-a0e0f45c75a2\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") "
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.041867 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-config\") pod \"398a387b-e59c-486c-a39c-a0e0f45c75a2\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") "
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.042242 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-dns-swift-storage-0\") pod \"398a387b-e59c-486c-a39c-a0e0f45c75a2\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") "
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.042293 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-ovsdbserver-sb\") pod \"398a387b-e59c-486c-a39c-a0e0f45c75a2\" (UID: \"398a387b-e59c-486c-a39c-a0e0f45c75a2\") "
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.042625 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b578dfb2-8a7f-420d-a503-d2eac607b648-logs\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.042660 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b578dfb2-8a7f-420d-a503-d2eac607b648-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.042725 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.042778 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.042810 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.042866 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.042897 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.042927 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctxwp\" (UniqueName: \"kubernetes.io/projected/b578dfb2-8a7f-420d-a503-d2eac607b648-kube-api-access-ctxwp\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.043345 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b578dfb2-8a7f-420d-a503-d2eac607b648-logs\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.043364 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b578dfb2-8a7f-420d-a503-d2eac607b648-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.043608 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.050971 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.051066 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.054683 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.054797 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.055191 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/398a387b-e59c-486c-a39c-a0e0f45c75a2-kube-api-access-s2hth" (OuterVolumeSpecName: "kube-api-access-s2hth") pod "398a387b-e59c-486c-a39c-a0e0f45c75a2" (UID: "398a387b-e59c-486c-a39c-a0e0f45c75a2"). InnerVolumeSpecName "kube-api-access-s2hth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.060077 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctxwp\" (UniqueName: \"kubernetes.io/projected/b578dfb2-8a7f-420d-a503-d2eac607b648-kube-api-access-ctxwp\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.103083 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.106163 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "398a387b-e59c-486c-a39c-a0e0f45c75a2" (UID: "398a387b-e59c-486c-a39c-a0e0f45c75a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.114974 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-config" (OuterVolumeSpecName: "config") pod "398a387b-e59c-486c-a39c-a0e0f45c75a2" (UID: "398a387b-e59c-486c-a39c-a0e0f45c75a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.118026 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "398a387b-e59c-486c-a39c-a0e0f45c75a2" (UID: "398a387b-e59c-486c-a39c-a0e0f45c75a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.130568 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "398a387b-e59c-486c-a39c-a0e0f45c75a2" (UID: "398a387b-e59c-486c-a39c-a0e0f45c75a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.145329 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.145363 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2hth\" (UniqueName: \"kubernetes.io/projected/398a387b-e59c-486c-a39c-a0e0f45c75a2-kube-api-access-s2hth\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.145377 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.145388 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-config\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.145396 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.149187 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "398a387b-e59c-486c-a39c-a0e0f45c75a2" (UID: "398a387b-e59c-486c-a39c-a0e0f45c75a2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.199308 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.247540 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/398a387b-e59c-486c-a39c-a0e0f45c75a2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.816859 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" event={"ID":"398a387b-e59c-486c-a39c-a0e0f45c75a2","Type":"ContainerDied","Data":"27fd6da24184231f0d7db3f5cfdc14a01e16a6fe49e10a9e67404749ff52bcd6"}
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.816926 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4"
Dec 05 19:24:25 crc kubenswrapper[4828]: E1205 19:24:25.818970 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-9gl2x" podUID="8d22bcf5-bf39-4595-8742-5d8c3018e7bf"
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.872112 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rf9j4"]
Dec 05 19:24:25 crc kubenswrapper[4828]: I1205 19:24:25.878562 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rf9j4"]
Dec 05 19:24:26 crc kubenswrapper[4828]: I1205 19:24:26.460414 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097fc905-c9d4-49b9-a454-2aaa1b8ad22f" path="/var/lib/kubelet/pods/097fc905-c9d4-49b9-a454-2aaa1b8ad22f/volumes"
Dec 05 19:24:26 crc kubenswrapper[4828]: I1205 19:24:26.460972 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="398a387b-e59c-486c-a39c-a0e0f45c75a2" path="/var/lib/kubelet/pods/398a387b-e59c-486c-a39c-a0e0f45c75a2/volumes"
Dec 05 19:24:26 crc kubenswrapper[4828]: I1205 19:24:26.461640 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cf7f90d-1042-41f7-b072-57794b005f3d" path="/var/lib/kubelet/pods/8cf7f90d-1042-41f7-b072-57794b005f3d/volumes"
Dec 05 19:24:28 crc kubenswrapper[4828]: I1205 19:24:28.502984 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rf9j4" podUID="398a387b-e59c-486c-a39c-a0e0f45c75a2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout"
Dec 05 19:24:29 crc kubenswrapper[4828]: I1205 19:24:29.324672 4828 scope.go:117] "RemoveContainer" containerID="538977d0b7d9845f46256cecc5720933660f4b760c76688a4b1246a27ff6ba0d"
Dec 05 19:24:29 crc kubenswrapper[4828]: E1205 19:24:29.333629 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Dec 05 19:24:29 crc kubenswrapper[4828]: E1205 19:24:29.333786 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjw66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7gmm7_openstack(ffc75dac-d7b0-41ce-ac4d-94f251036f95): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 19:24:29 crc kubenswrapper[4828]: E1205 19:24:29.335210 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7gmm7" podUID="ffc75dac-d7b0-41ce-ac4d-94f251036f95"
Dec 05 19:24:29 crc kubenswrapper[4828]: I1205 19:24:29.856322 4828 scope.go:117] "RemoveContainer" containerID="2b7e37af53ad45e1d5d49b4c13489828e428a80aa190ebc46de259368bbf9e01"
Dec 05 19:24:29 crc kubenswrapper[4828]: E1205 19:24:29.877130 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7gmm7" podUID="ffc75dac-d7b0-41ce-ac4d-94f251036f95"
Dec 05 19:24:29 crc kubenswrapper[4828]: I1205 19:24:29.899686 4828 scope.go:117] "RemoveContainer" containerID="b17708d34872bb0deb8095e27e925a5e14b1b238a131014e69b851fcd20bc402"
Dec 05 19:24:29 crc kubenswrapper[4828]: I1205 19:24:29.903582 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-594b9fb44-r9zh6"]
Dec 05 19:24:29 crc kubenswrapper[4828]: W1205 19:24:29.929715 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99c01665_feb9_49f7_a97a_b6e6d87dc991.slice/crio-8b172fac74dbc5c7f29038fe24b59a5edac0be45d312d651a95b2edc692c83dc WatchSource:0}: Error finding container 8b172fac74dbc5c7f29038fe24b59a5edac0be45d312d651a95b2edc692c83dc: Status 404 returned error can't find the container with id 8b172fac74dbc5c7f29038fe24b59a5edac0be45d312d651a95b2edc692c83dc
Dec 05 19:24:29 crc kubenswrapper[4828]: I1205 19:24:29.965133 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v6w4g"]
Dec 05 19:24:29 crc kubenswrapper[4828]: I1205 19:24:29.974848 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-699b69c564-442lb"]
Dec 05 19:24:29 crc kubenswrapper[4828]: I1205 19:24:29.981100 4828 scope.go:117] "RemoveContainer" containerID="37bb34b2e7a9f1094c4687bc1df6e79504acda85f6285808deea147aabb59295"
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.054383 4828 scope.go:117] "RemoveContainer" containerID="8579c88a9420a6586f124c7f2f06c91c3d09e47fdc646e3ff0320f073edd0dc9"
Dec 05 19:24:30 crc kubenswrapper[4828]: W1205 19:24:30.068908 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08bf11d8_3591_4229_be9a_ce8b8e709739.slice/crio-fa034736667a89808f006d89345d3c100b14a9de2cf14a5270fb7c083ff6f6dc WatchSource:0}: Error finding container fa034736667a89808f006d89345d3c100b14a9de2cf14a5270fb7c083ff6f6dc: Status 404 returned error can't find the container with id fa034736667a89808f006d89345d3c100b14a9de2cf14a5270fb7c083ff6f6dc
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.133953 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 05 19:24:30 crc kubenswrapper[4828]: W1205 19:24:30.167500 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59e69475_93aa_4875_8997_7cfa85de4b75.slice/crio-1a74db463782e10887dbe348094f3127de5388a3531fcaa0d5b2a9fdd72b0d51 WatchSource:0}: Error finding container 1a74db463782e10887dbe348094f3127de5388a3531fcaa0d5b2a9fdd72b0d51: Status 404 returned error can't find the container with id 1a74db463782e10887dbe348094f3127de5388a3531fcaa0d5b2a9fdd72b0d51
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.419307 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.898688 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-594b9fb44-r9zh6" event={"ID":"99c01665-feb9-49f7-a97a-b6e6d87dc991","Type":"ContainerStarted","Data":"198f916234e7a5de60bad943004477a66c36dbb31ea1be25815ff84b20d59d07"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.899143 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-594b9fb44-r9zh6" event={"ID":"99c01665-feb9-49f7-a97a-b6e6d87dc991","Type":"ContainerStarted","Data":"c1e1f089b5593ffad6e0addbdb5a01478d8a2ce5b3a09f8a27ac22a11d903a88"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.899156 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-594b9fb44-r9zh6" event={"ID":"99c01665-feb9-49f7-a97a-b6e6d87dc991","Type":"ContainerStarted","Data":"8b172fac74dbc5c7f29038fe24b59a5edac0be45d312d651a95b2edc692c83dc"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.901357 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59e69475-93aa-4875-8997-7cfa85de4b75","Type":"ContainerStarted","Data":"1a74db463782e10887dbe348094f3127de5388a3531fcaa0d5b2a9fdd72b0d51"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.907084 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-699b69c564-442lb" event={"ID":"74df4612-463b-4b3c-8f2d-7dbb9494d6fe","Type":"ContainerStarted","Data":"fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.907130 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-699b69c564-442lb" event={"ID":"74df4612-463b-4b3c-8f2d-7dbb9494d6fe","Type":"ContainerStarted","Data":"389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.907145 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-699b69c564-442lb" event={"ID":"74df4612-463b-4b3c-8f2d-7dbb9494d6fe","Type":"ContainerStarted","Data":"99f2205c2e3f2b58388c4dbd464dddffe0015265cc6fc396bf3e7bae5835640a"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.909531 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72755327-9414-46f2-b3ed-d19120b5876e","Type":"ContainerStarted","Data":"057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.910740 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v6w4g" event={"ID":"08bf11d8-3591-4229-be9a-ce8b8e709739","Type":"ContainerStarted","Data":"4a6ad32a31cc37eca2bf98c701150f8edfb71b9a920b5dc83017a43f65114bfd"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.910765 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v6w4g" event={"ID":"08bf11d8-3591-4229-be9a-ce8b8e709739","Type":"ContainerStarted","Data":"fa034736667a89808f006d89345d3c100b14a9de2cf14a5270fb7c083ff6f6dc"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.927294 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-594b9fb44-r9zh6" podStartSLOduration=25.927276399 podStartE2EDuration="25.927276399s" podCreationTimestamp="2025-12-05 19:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:30.921985217 +0000 UTC m=+1248.817207523" watchObservedRunningTime="2025-12-05 19:24:30.927276399 +0000 UTC m=+1248.822498695"
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.928757 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77b4ccd85-stwwx" event={"ID":"b940b754-ad6e-454e-ab2a-242b1b63b344","Type":"ContainerStarted","Data":"3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.928799 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77b4ccd85-stwwx" event={"ID":"b940b754-ad6e-454e-ab2a-242b1b63b344","Type":"ContainerStarted","Data":"462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.928851 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77b4ccd85-stwwx" podUID="b940b754-ad6e-454e-ab2a-242b1b63b344" containerName="horizon-log" containerID="cri-o://462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107" gracePeriod=30
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.928915 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77b4ccd85-stwwx" podUID="b940b754-ad6e-454e-ab2a-242b1b63b344" containerName="horizon" containerID="cri-o://3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11" gracePeriod=30
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.931760 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m6tvf" event={"ID":"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101","Type":"ContainerStarted","Data":"e4a6aa4accfa85b5a1aef88095642407a93bea0c712eb77b6c4a5305580c32bb"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.941047 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b578dfb2-8a7f-420d-a503-d2eac607b648","Type":"ContainerStarted","Data":"195942677f11a9cc63c0d603d8e517b51e29cc1cd39b3ed94458f07896d71852"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.948551 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b878f489-x2jpv" event={"ID":"ada41f83-4947-4d14-a1c1-c1dd44f7d656","Type":"ContainerStarted","Data":"b8c46648beff69ab8a2e0648f2c05a4aadfdf97eccb9fc3a83ed3db5a2a29f57"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.948592 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b878f489-x2jpv" event={"ID":"ada41f83-4947-4d14-a1c1-c1dd44f7d656","Type":"ContainerStarted","Data":"d366579dcb0151d659d11ed38464a9aee419b4edeadc63d7760ecb1ec70c9f80"}
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.948712 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b878f489-x2jpv" podUID="ada41f83-4947-4d14-a1c1-c1dd44f7d656" containerName="horizon-log" containerID="cri-o://d366579dcb0151d659d11ed38464a9aee419b4edeadc63d7760ecb1ec70c9f80" gracePeriod=30
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.948792 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b878f489-x2jpv" podUID="ada41f83-4947-4d14-a1c1-c1dd44f7d656" containerName="horizon" containerID="cri-o://b8c46648beff69ab8a2e0648f2c05a4aadfdf97eccb9fc3a83ed3db5a2a29f57" gracePeriod=30
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.963006 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-699b69c564-442lb" podStartSLOduration=25.962986427 podStartE2EDuration="25.962986427s" podCreationTimestamp="2025-12-05 19:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:30.941592918 +0000 UTC m=+1248.836815224" watchObservedRunningTime="2025-12-05 19:24:30.962986427 +0000 UTC m=+1248.858208733"
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.969500 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-v6w4g" podStartSLOduration=19.969482733 podStartE2EDuration="19.969482733s" podCreationTimestamp="2025-12-05 19:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:30.957473348 +0000 UTC m=+1248.852695654" watchObservedRunningTime="2025-12-05 19:24:30.969482733 +0000 UTC m=+1248.864705039"
Dec 05 19:24:30 crc kubenswrapper[4828]: I1205 19:24:30.978494 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-m6tvf" podStartSLOduration=7.941086552 podStartE2EDuration="34.978478507s" podCreationTimestamp="2025-12-05 19:23:56 +0000 UTC" firstStartedPulling="2025-12-05 19:23:57.852770219 +0000 UTC m=+1215.747992525" lastFinishedPulling="2025-12-05 19:24:24.890162174 +0000 UTC m=+1242.785384480" observedRunningTime="2025-12-05 19:24:30.977256713 +0000 UTC m=+1248.872479019" watchObservedRunningTime="2025-12-05 19:24:30.978478507 +0000 UTC m=+1248.873700813"
Dec 05 19:24:31 crc kubenswrapper[4828]: I1205 19:24:31.016239 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-77b4ccd85-stwwx" podStartSLOduration=8.465240063 podStartE2EDuration="36.016216069s" podCreationTimestamp="2025-12-05 19:23:55 +0000 UTC" firstStartedPulling="2025-12-05 19:23:57.347461542 +0000 UTC m=+1215.242683848" lastFinishedPulling="2025-12-05 19:24:24.898437548 +0000 UTC m=+1242.793659854" observedRunningTime="2025-12-05 19:24:31.015917111 +0000 UTC m=+1248.911139417" watchObservedRunningTime="2025-12-05 19:24:31.016216069 +0000 UTC m=+1248.911438375"
Dec 05 19:24:31 crc kubenswrapper[4828]: I1205 19:24:31.040496 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b878f489-x2jpv" podStartSLOduration=2.981159161 podStartE2EDuration="33.040477976s" podCreationTimestamp="2025-12-05 19:23:58 +0000 UTC" firstStartedPulling="2025-12-05 19:23:59.756002532 +0000 UTC m=+1217.651224848" lastFinishedPulling="2025-12-05 19:24:29.815321337 +0000 UTC m=+1247.710543663" observedRunningTime="2025-12-05 19:24:31.035366518 +0000 UTC m=+1248.930588824" watchObservedRunningTime="2025-12-05 19:24:31.040477976 +0000 UTC m=+1248.935700282"
Dec 05 19:24:31 crc kubenswrapper[4828]: I1205 19:24:31.963200 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b578dfb2-8a7f-420d-a503-d2eac607b648","Type":"ContainerStarted","Data":"fad55517291a1a37c02bfb4fd854ff8de7c34131aa6f5455c37593aced627250"}
Dec 05 19:24:31 crc kubenswrapper[4828]: I1205 19:24:31.967663 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59e69475-93aa-4875-8997-7cfa85de4b75","Type":"ContainerStarted","Data":"86547ad61d25d2149d2bbdfa2485b1b8ec2cd2fd2c5897bf04dd073b9e4c7059"}
Dec 05 19:24:31 crc kubenswrapper[4828]: I1205 19:24:31.967709 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59e69475-93aa-4875-8997-7cfa85de4b75","Type":"ContainerStarted","Data":"439b8fa3d319ffe2baabb3df6f255b4d1f96aa584e83fdf10c89904eba3fbd98"}
Dec 05 19:24:32 crc kubenswrapper[4828]: I1205 19:24:32.003927 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.003902993 podStartE2EDuration="16.003902993s" podCreationTimestamp="2025-12-05 19:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:31.996314818 +0000 UTC m=+1249.891537124" watchObservedRunningTime="2025-12-05 19:24:32.003902993 +0000 UTC m=+1249.899125309"
Dec 05 19:24:33 crc kubenswrapper[4828]: I1205 19:24:33.008206 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b578dfb2-8a7f-420d-a503-d2eac607b648","Type":"ContainerStarted","Data":"892f67ecc83c88204a5b994a1f12ae16e1385d91669efa57e70986ffceaa6285"}
Dec 05 19:24:33 crc kubenswrapper[4828]: I1205 19:24:33.013169 4828 generic.go:334] "Generic (PLEG): container finished" podID="614bd6cb-e60d-4c28-9e7e-132ab3040deb" containerID="3f2c6eed6861458f9e6b32c41a9d0da2164f1f361c21514c5879a2593e92fe76" exitCode=0
Dec 05 19:24:33 crc kubenswrapper[4828]: I1205 19:24:33.013811 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mdbj4" event={"ID":"614bd6cb-e60d-4c28-9e7e-132ab3040deb","Type":"ContainerDied","Data":"3f2c6eed6861458f9e6b32c41a9d0da2164f1f361c21514c5879a2593e92fe76"}
Dec 05 19:24:33 crc kubenswrapper[4828]: I1205 19:24:33.064912 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.064885123 podStartE2EDuration="9.064885123s" podCreationTimestamp="2025-12-05 19:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:33.056166267 +0000 UTC m=+1250.951388563" watchObservedRunningTime="2025-12-05 19:24:33.064885123 +0000 UTC m=+1250.960107439"
Dec 05 19:24:34 crc kubenswrapper[4828]: I1205 19:24:34.027200 4828 generic.go:334] "Generic (PLEG): container finished" podID="bcb21ca0-c48a-4c70-bb4f-fe1f240b3101" containerID="e4a6aa4accfa85b5a1aef88095642407a93bea0c712eb77b6c4a5305580c32bb" exitCode=0
Dec 05 19:24:34 crc kubenswrapper[4828]: I1205 19:24:34.027345 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m6tvf" event={"ID":"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101","Type":"ContainerDied","Data":"e4a6aa4accfa85b5a1aef88095642407a93bea0c712eb77b6c4a5305580c32bb"}
Dec 05 19:24:34 crc kubenswrapper[4828]: I1205 19:24:34.033948 4828 generic.go:334] "Generic (PLEG): container finished" podID="08bf11d8-3591-4229-be9a-ce8b8e709739" containerID="4a6ad32a31cc37eca2bf98c701150f8edfb71b9a920b5dc83017a43f65114bfd" exitCode=0
Dec 05 19:24:34 crc kubenswrapper[4828]: I1205 19:24:34.033988 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v6w4g" event={"ID":"08bf11d8-3591-4229-be9a-ce8b8e709739","Type":"ContainerDied","Data":"4a6ad32a31cc37eca2bf98c701150f8edfb71b9a920b5dc83017a43f65114bfd"}
Dec 05 19:24:35 crc kubenswrapper[4828]: I1205 19:24:35.202153 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:35 crc kubenswrapper[4828]: I1205 19:24:35.203228 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:35 crc kubenswrapper[4828]: I1205 19:24:35.251039 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:35 crc kubenswrapper[4828]: I1205 19:24:35.301388 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.050510 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mdbj4" event={"ID":"614bd6cb-e60d-4c28-9e7e-132ab3040deb","Type":"ContainerDied","Data":"018a86f13fe71d060ea547b01ac6ebaf64427fe5d419868042fabe717c37012c"}
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.050771 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="018a86f13fe71d060ea547b01ac6ebaf64427fe5d419868042fabe717c37012c"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.052367 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m6tvf" event={"ID":"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101","Type":"ContainerDied","Data":"614a7b4e50b01e5c8c8fe10267ee131cb6c2de6b63490a6113716f9598327834"}
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.052389 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="614a7b4e50b01e5c8c8fe10267ee131cb6c2de6b63490a6113716f9598327834"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.054880 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v6w4g" event={"ID":"08bf11d8-3591-4229-be9a-ce8b8e709739","Type":"ContainerDied","Data":"fa034736667a89808f006d89345d3c100b14a9de2cf14a5270fb7c083ff6f6dc"}
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.054923 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa034736667a89808f006d89345d3c100b14a9de2cf14a5270fb7c083ff6f6dc"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.054952 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.054964 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.133349 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v6w4g"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.133796 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m6tvf"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.143555 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mdbj4"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.150885 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-699b69c564-442lb"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.150924 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-699b69c564-442lb"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239377 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-combined-ca-bundle\") pod \"08bf11d8-3591-4229-be9a-ce8b8e709739\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239427 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-credential-keys\") pod \"08bf11d8-3591-4229-be9a-ce8b8e709739\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239480 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pjmw\" (UniqueName: \"kubernetes.io/projected/08bf11d8-3591-4229-be9a-ce8b8e709739-kube-api-access-5pjmw\") pod \"08bf11d8-3591-4229-be9a-ce8b8e709739\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239500 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-scripts\") pod \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239527 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-config-data\") pod \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239564 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-combined-ca-bundle\") pod \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239592 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-fernet-keys\") pod \"08bf11d8-3591-4229-be9a-ce8b8e709739\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239615 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7gkz\" (UniqueName: \"kubernetes.io/projected/614bd6cb-e60d-4c28-9e7e-132ab3040deb-kube-api-access-k7gkz\") pod \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\" (UID: \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239643 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614bd6cb-e60d-4c28-9e7e-132ab3040deb-combined-ca-bundle\") pod \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\" (UID: \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239711 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-config-data\") pod \"08bf11d8-3591-4229-be9a-ce8b8e709739\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239738 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/614bd6cb-e60d-4c28-9e7e-132ab3040deb-config\") pod \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\" (UID: \"614bd6cb-e60d-4c28-9e7e-132ab3040deb\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239767 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-logs\") pod \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239782 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-scripts\") pod \"08bf11d8-3591-4229-be9a-ce8b8e709739\" (UID: \"08bf11d8-3591-4229-be9a-ce8b8e709739\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.239802 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlvmp\" (UniqueName: \"kubernetes.io/projected/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-kube-api-access-jlvmp\") pod \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\" (UID: \"bcb21ca0-c48a-4c70-bb4f-fe1f240b3101\") "
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.245448 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "08bf11d8-3591-4229-be9a-ce8b8e709739" (UID: "08bf11d8-3591-4229-be9a-ce8b8e709739"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.248120 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-logs" (OuterVolumeSpecName: "logs") pod "bcb21ca0-c48a-4c70-bb4f-fe1f240b3101" (UID: "bcb21ca0-c48a-4c70-bb4f-fe1f240b3101"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.248672 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "08bf11d8-3591-4229-be9a-ce8b8e709739" (UID: "08bf11d8-3591-4229-be9a-ce8b8e709739"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.252999 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-594b9fb44-r9zh6"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.254064 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-594b9fb44-r9zh6"
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.254716 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-scripts" (OuterVolumeSpecName: "scripts") pod "bcb21ca0-c48a-4c70-bb4f-fe1f240b3101" (UID: "bcb21ca0-c48a-4c70-bb4f-fe1f240b3101"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.256789 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-scripts" (OuterVolumeSpecName: "scripts") pod "08bf11d8-3591-4229-be9a-ce8b8e709739" (UID: "08bf11d8-3591-4229-be9a-ce8b8e709739"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.284161 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-kube-api-access-jlvmp" (OuterVolumeSpecName: "kube-api-access-jlvmp") pod "bcb21ca0-c48a-4c70-bb4f-fe1f240b3101" (UID: "bcb21ca0-c48a-4c70-bb4f-fe1f240b3101"). InnerVolumeSpecName "kube-api-access-jlvmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.284946 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08bf11d8-3591-4229-be9a-ce8b8e709739-kube-api-access-5pjmw" (OuterVolumeSpecName: "kube-api-access-5pjmw") pod "08bf11d8-3591-4229-be9a-ce8b8e709739" (UID: "08bf11d8-3591-4229-be9a-ce8b8e709739"). InnerVolumeSpecName "kube-api-access-5pjmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.285054 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614bd6cb-e60d-4c28-9e7e-132ab3040deb-kube-api-access-k7gkz" (OuterVolumeSpecName: "kube-api-access-k7gkz") pod "614bd6cb-e60d-4c28-9e7e-132ab3040deb" (UID: "614bd6cb-e60d-4c28-9e7e-132ab3040deb"). InnerVolumeSpecName "kube-api-access-k7gkz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.344408 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-logs\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.344443 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.344457 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlvmp\" (UniqueName: \"kubernetes.io/projected/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-kube-api-access-jlvmp\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.344470 4828 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.344486 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pjmw\" (UniqueName: \"kubernetes.io/projected/08bf11d8-3591-4229-be9a-ce8b8e709739-kube-api-access-5pjmw\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.344497 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.344508 4828 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.344519 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7gkz\" (UniqueName: \"kubernetes.io/projected/614bd6cb-e60d-4c28-9e7e-132ab3040deb-kube-api-access-k7gkz\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.353161 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08bf11d8-3591-4229-be9a-ce8b8e709739" (UID: "08bf11d8-3591-4229-be9a-ce8b8e709739"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.357247 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-config-data" (OuterVolumeSpecName: "config-data") pod "bcb21ca0-c48a-4c70-bb4f-fe1f240b3101" (UID: "bcb21ca0-c48a-4c70-bb4f-fe1f240b3101"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.388403 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcb21ca0-c48a-4c70-bb4f-fe1f240b3101" (UID: "bcb21ca0-c48a-4c70-bb4f-fe1f240b3101"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.390974 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/614bd6cb-e60d-4c28-9e7e-132ab3040deb-config" (OuterVolumeSpecName: "config") pod "614bd6cb-e60d-4c28-9e7e-132ab3040deb" (UID: "614bd6cb-e60d-4c28-9e7e-132ab3040deb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.407200 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-config-data" (OuterVolumeSpecName: "config-data") pod "08bf11d8-3591-4229-be9a-ce8b8e709739" (UID: "08bf11d8-3591-4229-be9a-ce8b8e709739"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.417359 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/614bd6cb-e60d-4c28-9e7e-132ab3040deb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "614bd6cb-e60d-4c28-9e7e-132ab3040deb" (UID: "614bd6cb-e60d-4c28-9e7e-132ab3040deb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.427936 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.450046 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.450077 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.450086 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.450094 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614bd6cb-e60d-4c28-9e7e-132ab3040deb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.450102 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bf11d8-3591-4229-be9a-ce8b8e709739-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:36 crc kubenswrapper[4828]: I1205 19:24:36.450110 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/614bd6cb-e60d-4c28-9e7e-132ab3040deb-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.068173 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m6tvf" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.068211 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-v6w4g" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.068359 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mdbj4" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.169082 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.169134 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.268402 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-89dcf679f-97rfx"] Dec 05 19:24:37 crc kubenswrapper[4828]: E1205 19:24:37.268787 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614bd6cb-e60d-4c28-9e7e-132ab3040deb" containerName="neutron-db-sync" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.268799 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="614bd6cb-e60d-4c28-9e7e-132ab3040deb" containerName="neutron-db-sync" Dec 05 19:24:37 crc kubenswrapper[4828]: E1205 19:24:37.275060 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398a387b-e59c-486c-a39c-a0e0f45c75a2" containerName="init" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.275104 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="398a387b-e59c-486c-a39c-a0e0f45c75a2" containerName="init" Dec 05 19:24:37 crc kubenswrapper[4828]: E1205 19:24:37.275119 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb21ca0-c48a-4c70-bb4f-fe1f240b3101" containerName="placement-db-sync" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.275126 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb21ca0-c48a-4c70-bb4f-fe1f240b3101" containerName="placement-db-sync" Dec 05 19:24:37 crc kubenswrapper[4828]: E1205 19:24:37.275137 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398a387b-e59c-486c-a39c-a0e0f45c75a2" containerName="dnsmasq-dns" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.275143 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="398a387b-e59c-486c-a39c-a0e0f45c75a2" containerName="dnsmasq-dns" Dec 05 19:24:37 crc kubenswrapper[4828]: E1205 19:24:37.275152 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08bf11d8-3591-4229-be9a-ce8b8e709739" containerName="keystone-bootstrap" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.275158 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="08bf11d8-3591-4229-be9a-ce8b8e709739" containerName="keystone-bootstrap" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.275468 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="08bf11d8-3591-4229-be9a-ce8b8e709739" containerName="keystone-bootstrap" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.275482 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="398a387b-e59c-486c-a39c-a0e0f45c75a2" containerName="dnsmasq-dns" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.275490 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="614bd6cb-e60d-4c28-9e7e-132ab3040deb" containerName="neutron-db-sync" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.275512 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb21ca0-c48a-4c70-bb4f-fe1f240b3101" containerName="placement-db-sync" 
Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.276085 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.279494 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7rdh4" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.279951 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.280131 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.280285 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.280584 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.280731 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.284845 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.299463 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-89dcf679f-97rfx"] Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.314076 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.365374 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-config-data\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.365456 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-scripts\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.365504 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-public-tls-certs\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.365544 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-credential-keys\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.365573 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-fernet-keys\") pod 
\"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.365607 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-internal-tls-certs\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.365625 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-combined-ca-bundle\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.365656 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxns4\" (UniqueName: \"kubernetes.io/projected/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-kube-api-access-mxns4\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.433752 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6bfcc469f6-vtpj6"] Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.436089 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.441016 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.441538 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.442131 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.442392 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jk4pw" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.442650 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.455491 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bfcc469f6-vtpj6"] Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.466783 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-scripts\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.466865 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-public-tls-certs\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.466906 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-credential-keys\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.466937 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-fernet-keys\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.466977 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-internal-tls-certs\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.466998 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-combined-ca-bundle\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.467030 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxns4\" (UniqueName: \"kubernetes.io/projected/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-kube-api-access-mxns4\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.467066 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-config-data\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.473022 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-config-data\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.481363 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-public-tls-certs\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.483812 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-combined-ca-bundle\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.488289 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-scripts\") pod 
\"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.492330 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-fernet-keys\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.495670 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-credential-keys\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.503706 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-internal-tls-certs\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.546691 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxns4\" (UniqueName: \"kubernetes.io/projected/49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4-kube-api-access-mxns4\") pod \"keystone-89dcf679f-97rfx\" (UID: \"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4\") " pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.568728 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-logs\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.568875 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-internal-tls-certs\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.568897 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-public-tls-certs\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.569002 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-config-data\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.569043 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-scripts\") pod \"placement-6bfcc469f6-vtpj6\" (UID: 
\"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.569061 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-combined-ca-bundle\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.569075 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2xcw\" (UniqueName: \"kubernetes.io/projected/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-kube-api-access-n2xcw\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.643045 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.669021 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-xk8rt"] Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.670700 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-config-data\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.670738 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-combined-ca-bundle\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.670755 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-scripts\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.670774 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2xcw\" (UniqueName: \"kubernetes.io/projected/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-kube-api-access-n2xcw\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.670806 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-logs\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.670894 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-internal-tls-certs\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 
19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.670911 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-public-tls-certs\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.672125 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.674222 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-logs\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.682401 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-public-tls-certs\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.682676 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-scripts\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.688395 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-internal-tls-certs\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.691120 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-config-data\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.696361 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-combined-ca-bundle\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.696411 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-596dd8d85b-r59nh"] Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.697868 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.709473 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.709974 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wsbsj" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.710126 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.710335 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.715664 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-xk8rt"] Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.723465 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2xcw\" (UniqueName: \"kubernetes.io/projected/a9e67cf9-61e2-43a1-867a-a8f97ada16a4-kube-api-access-n2xcw\") pod \"placement-6bfcc469f6-vtpj6\" (UID: \"a9e67cf9-61e2-43a1-867a-a8f97ada16a4\") " pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.736902 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-596dd8d85b-r59nh"] Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.757880 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.781750 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psxsh\" (UniqueName: \"kubernetes.io/projected/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-kube-api-access-psxsh\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.781803 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-dns-svc\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.781841 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-httpd-config\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.781863 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-config\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.781878 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: 
\"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.781894 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.781916 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-combined-ca-bundle\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.781953 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-ovndb-tls-certs\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.781995 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.782018 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-config\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.782037 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28mbx\" (UniqueName: \"kubernetes.io/projected/a89115ab-f300-433f-934e-dce679bf1877-kube-api-access-28mbx\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.883291 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psxsh\" (UniqueName: \"kubernetes.io/projected/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-kube-api-access-psxsh\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.883336 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-dns-svc\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.883361 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-httpd-config\") pod 
\"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.883385 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-config\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.883401 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.883416 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.883436 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-combined-ca-bundle\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.883470 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-ovndb-tls-certs\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.883501 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.883522 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-config\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.883542 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28mbx\" (UniqueName: \"kubernetes.io/projected/a89115ab-f300-433f-934e-dce679bf1877-kube-api-access-28mbx\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.885052 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-dns-svc\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " 
pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.886244 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.888903 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.889479 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-config\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.892607 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-httpd-config\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.893124 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-combined-ca-bundle\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.895084 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-config\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.898489 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-ovndb-tls-certs\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.899130 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.901504 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28mbx\" (UniqueName: \"kubernetes.io/projected/a89115ab-f300-433f-934e-dce679bf1877-kube-api-access-28mbx\") pod \"neutron-596dd8d85b-r59nh\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:37 crc kubenswrapper[4828]: I1205 19:24:37.907615 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-psxsh\" (UniqueName: \"kubernetes.io/projected/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-kube-api-access-psxsh\") pod \"dnsmasq-dns-55f844cf75-xk8rt\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:38 crc kubenswrapper[4828]: I1205 19:24:38.081450 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 19:24:38 crc kubenswrapper[4828]: I1205 19:24:38.082178 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 19:24:38 crc kubenswrapper[4828]: I1205 19:24:38.082203 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 19:24:38 crc kubenswrapper[4828]: I1205 19:24:38.143651 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:38 crc kubenswrapper[4828]: I1205 19:24:38.173014 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:39 crc kubenswrapper[4828]: I1205 19:24:39.196671 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.102077 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.102375 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.238686 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.238812 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.591177 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6dc6b5c7cf-8mlhh"] Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.592876 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.599576 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.599791 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.606991 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dc6b5c7cf-8mlhh"] Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.689105 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-ovndb-tls-certs\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.689147 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-httpd-config\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.689186 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-internal-tls-certs\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.689232 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-combined-ca-bundle\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.689250 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-config\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.689378 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg2pt\" (UniqueName: \"kubernetes.io/projected/f336b9b2-7051-43e5-8b10-ad9cab15c947-kube-api-access-pg2pt\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.689468 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-public-tls-certs\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.794092 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-httpd-config\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.794181 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-internal-tls-certs\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.794223 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-combined-ca-bundle\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.794247 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-config\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.794290 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg2pt\" (UniqueName: \"kubernetes.io/projected/f336b9b2-7051-43e5-8b10-ad9cab15c947-kube-api-access-pg2pt\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.794333 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-public-tls-certs\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:40 crc kubenswrapper[4828]: I1205 19:24:40.794450 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-ovndb-tls-certs\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:41 crc kubenswrapper[4828]: I1205 19:24:41.086210 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-config\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:41 crc kubenswrapper[4828]: I1205 19:24:41.086240 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg2pt\" (UniqueName: \"kubernetes.io/projected/f336b9b2-7051-43e5-8b10-ad9cab15c947-kube-api-access-pg2pt\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:41 crc kubenswrapper[4828]: I1205 19:24:41.086725 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-public-tls-certs\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: 
\"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:41 crc kubenswrapper[4828]: I1205 19:24:41.091420 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-ovndb-tls-certs\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:41 crc kubenswrapper[4828]: I1205 19:24:41.094904 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-internal-tls-certs\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:41 crc kubenswrapper[4828]: I1205 19:24:41.105671 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-combined-ca-bundle\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:41 crc kubenswrapper[4828]: I1205 19:24:41.108147 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f336b9b2-7051-43e5-8b10-ad9cab15c947-httpd-config\") pod \"neutron-6dc6b5c7cf-8mlhh\" (UID: \"f336b9b2-7051-43e5-8b10-ad9cab15c947\") " pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:41 crc kubenswrapper[4828]: I1205 19:24:41.277569 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 19:24:41 crc kubenswrapper[4828]: I1205 19:24:41.277684 4828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 19:24:41 crc kubenswrapper[4828]: I1205 19:24:41.291332 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 19:24:41 crc kubenswrapper[4828]: I1205 19:24:41.383022 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:41 crc kubenswrapper[4828]: I1205 19:24:41.533121 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 19:24:42 crc kubenswrapper[4828]: I1205 19:24:42.070740 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-596dd8d85b-r59nh"] Dec 05 19:24:42 crc kubenswrapper[4828]: I1205 19:24:42.133039 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-596dd8d85b-r59nh" event={"ID":"a89115ab-f300-433f-934e-dce679bf1877","Type":"ContainerStarted","Data":"b812090d9f30edb16dad00f569ea2264fe06cefa3ebe3d334a7dcc455f1f6807"} Dec 05 19:24:42 crc kubenswrapper[4828]: I1205 19:24:42.308404 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bfcc469f6-vtpj6"] Dec 05 19:24:42 crc kubenswrapper[4828]: I1205 19:24:42.331931 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-89dcf679f-97rfx"] Dec 05 19:24:42 crc kubenswrapper[4828]: W1205 19:24:42.376115 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9e67cf9_61e2_43a1_867a_a8f97ada16a4.slice/crio-d22badf6d21269f3ba45b88a73ee6ecc0c51ee08317f800a95ed9f973e23f283 WatchSource:0}: Error finding container d22badf6d21269f3ba45b88a73ee6ecc0c51ee08317f800a95ed9f973e23f283: Status 404 returned error can't find the container with id d22badf6d21269f3ba45b88a73ee6ecc0c51ee08317f800a95ed9f973e23f283 Dec 05 19:24:42 crc kubenswrapper[4828]: I1205 19:24:42.427317 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-xk8rt"] Dec 05 19:24:42 crc kubenswrapper[4828]: I1205 19:24:42.666296 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dc6b5c7cf-8mlhh"] Dec 05 19:24:43 crc kubenswrapper[4828]: I1205 19:24:43.153534 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-89dcf679f-97rfx" event={"ID":"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4","Type":"ContainerStarted","Data":"752ce62d0257a9e9c6ca8a50df335034a3a16bbf0c8144f816714aa6ac4cf36f"} Dec 05 19:24:43 crc kubenswrapper[4828]: I1205 19:24:43.163736 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72755327-9414-46f2-b3ed-d19120b5876e","Type":"ContainerStarted","Data":"77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e"} Dec 05 19:24:43 crc kubenswrapper[4828]: I1205 19:24:43.170056 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bfcc469f6-vtpj6" event={"ID":"a9e67cf9-61e2-43a1-867a-a8f97ada16a4","Type":"ContainerStarted","Data":"d22badf6d21269f3ba45b88a73ee6ecc0c51ee08317f800a95ed9f973e23f283"} Dec 05 19:24:43 crc kubenswrapper[4828]: I1205 19:24:43.171627 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" event={"ID":"e5ac966d-0aae-4f8f-a38b-2debce3a8e64","Type":"ContainerStarted","Data":"145602531f3b887683b01a346401fada53871fdd879deef6382e343a73f56727"} Dec 05 19:24:43 crc kubenswrapper[4828]: I1205 19:24:43.181265 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc6b5c7cf-8mlhh" event={"ID":"f336b9b2-7051-43e5-8b10-ad9cab15c947","Type":"ContainerStarted","Data":"64e0cd689c0b0251aaee4d238c748cf944cafee6765d22aa7d1cd7150ea8ea9b"} Dec 05 19:24:43 crc kubenswrapper[4828]: I1205 19:24:43.195440 4828 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-596dd8d85b-r59nh" event={"ID":"a89115ab-f300-433f-934e-dce679bf1877","Type":"ContainerStarted","Data":"8b38dbdcc9480ed1ff926da8258e3e3ea50c6e23206ed3a90de306872e659f34"} Dec 05 19:24:43 crc kubenswrapper[4828]: I1205 19:24:43.200002 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9gl2x" event={"ID":"8d22bcf5-bf39-4595-8742-5d8c3018e7bf","Type":"ContainerStarted","Data":"5ee1753ab1252c0568398983c822d650048b764991a2e859706c769c71ba2df1"} Dec 05 19:24:43 crc kubenswrapper[4828]: I1205 19:24:43.217460 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-9gl2x" podStartSLOduration=3.2705016860000002 podStartE2EDuration="47.217446168s" podCreationTimestamp="2025-12-05 19:23:56 +0000 UTC" firstStartedPulling="2025-12-05 19:23:57.60610705 +0000 UTC m=+1215.501329356" lastFinishedPulling="2025-12-05 19:24:41.553051532 +0000 UTC m=+1259.448273838" observedRunningTime="2025-12-05 19:24:43.216096291 +0000 UTC m=+1261.111318597" watchObservedRunningTime="2025-12-05 19:24:43.217446168 +0000 UTC m=+1261.112668474" Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.221158 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7gmm7" event={"ID":"ffc75dac-d7b0-41ce-ac4d-94f251036f95","Type":"ContainerStarted","Data":"d57d2dd93f88a88fabcd4e6c356fb068468d4b77d5c7702115c0c843355a345b"} Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.230707 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bfcc469f6-vtpj6" event={"ID":"a9e67cf9-61e2-43a1-867a-a8f97ada16a4","Type":"ContainerStarted","Data":"3209ecfd00e0fe3671e0ee6001591bd243a2a2a6885599cbeeb921f899c024e0"} Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.230765 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bfcc469f6-vtpj6" event={"ID":"a9e67cf9-61e2-43a1-867a-a8f97ada16a4","Type":"ContainerStarted","Data":"785b2a65f47b157bd4df78e12db9ea4f31cfe7088f0c5a0c4a71152d4adefe24"} Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.231954 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.231991 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.248777 4828 generic.go:334] "Generic (PLEG): container finished" podID="e5ac966d-0aae-4f8f-a38b-2debce3a8e64" containerID="7ae90cf6690274ea287c9ef115d59521397089e7ad3294402b51921eea17c098" exitCode=0 Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.248888 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" event={"ID":"e5ac966d-0aae-4f8f-a38b-2debce3a8e64","Type":"ContainerDied","Data":"7ae90cf6690274ea287c9ef115d59521397089e7ad3294402b51921eea17c098"} Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.255716 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc6b5c7cf-8mlhh" event={"ID":"f336b9b2-7051-43e5-8b10-ad9cab15c947","Type":"ContainerStarted","Data":"97dd61f0363731ea21ce23f0790fba4de36af722877a343fec3e44bde5da22c6"} Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.256077 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dc6b5c7cf-8mlhh" 
event={"ID":"f336b9b2-7051-43e5-8b10-ad9cab15c947","Type":"ContainerStarted","Data":"12aebda391356511d1a9bd85e0d5e34a35cef1d2aad482bc28fc595e3e25fd3b"} Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.257071 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.259905 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-596dd8d85b-r59nh" event={"ID":"a89115ab-f300-433f-934e-dce679bf1877","Type":"ContainerStarted","Data":"66beaa93c902cca0bf08c8dbbf7ea351d4526dadc4e93efeaefa2d3e1b495cf1"} Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.260687 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.276114 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-89dcf679f-97rfx" event={"ID":"49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4","Type":"ContainerStarted","Data":"724183faf031b6893e4013f6f0585ebee909e8f65a83792532990cefd5db90d7"} Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.276710 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.283704 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7gmm7" podStartSLOduration=3.961490963 podStartE2EDuration="49.283679891s" podCreationTimestamp="2025-12-05 19:23:55 +0000 UTC" firstStartedPulling="2025-12-05 19:23:57.440036659 +0000 UTC m=+1215.335258965" lastFinishedPulling="2025-12-05 19:24:42.762225587 +0000 UTC m=+1260.657447893" observedRunningTime="2025-12-05 19:24:44.273012881 +0000 UTC m=+1262.168235207" watchObservedRunningTime="2025-12-05 19:24:44.283679891 +0000 UTC m=+1262.178902197" Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.324468 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6bfcc469f6-vtpj6" podStartSLOduration=7.324446715 podStartE2EDuration="7.324446715s" podCreationTimestamp="2025-12-05 19:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:44.313799466 +0000 UTC m=+1262.209021792" watchObservedRunningTime="2025-12-05 19:24:44.324446715 +0000 UTC m=+1262.219669021" Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.353014 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6dc6b5c7cf-8mlhh" podStartSLOduration=4.352992768 podStartE2EDuration="4.352992768s" podCreationTimestamp="2025-12-05 19:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:44.339276226 +0000 UTC m=+1262.234498522" watchObservedRunningTime="2025-12-05 19:24:44.352992768 +0000 UTC m=+1262.248215064" Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.433705 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-596dd8d85b-r59nh" podStartSLOduration=7.433670814 podStartE2EDuration="7.433670814s" podCreationTimestamp="2025-12-05 19:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:44.427370292 +0000 UTC m=+1262.322592608" 
watchObservedRunningTime="2025-12-05 19:24:44.433670814 +0000 UTC m=+1262.328893120" Dec 05 19:24:44 crc kubenswrapper[4828]: I1205 19:24:44.458879 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-89dcf679f-97rfx" podStartSLOduration=7.458815265 podStartE2EDuration="7.458815265s" podCreationTimestamp="2025-12-05 19:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:44.452865954 +0000 UTC m=+1262.348088260" watchObservedRunningTime="2025-12-05 19:24:44.458815265 +0000 UTC m=+1262.354037571" Dec 05 19:24:45 crc kubenswrapper[4828]: I1205 19:24:45.294504 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" event={"ID":"e5ac966d-0aae-4f8f-a38b-2debce3a8e64","Type":"ContainerStarted","Data":"ba2fdb4af29467ade541b486f75fe8d0339ef5335c74e270382607186258287b"} Dec 05 19:24:45 crc kubenswrapper[4828]: I1205 19:24:45.295975 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:45 crc kubenswrapper[4828]: I1205 19:24:45.312790 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" podStartSLOduration=8.312774347 podStartE2EDuration="8.312774347s" podCreationTimestamp="2025-12-05 19:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:45.311978245 +0000 UTC m=+1263.207200551" watchObservedRunningTime="2025-12-05 19:24:45.312774347 +0000 UTC m=+1263.207996653" Dec 05 19:24:46 crc kubenswrapper[4828]: I1205 19:24:46.153299 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-699b69c564-442lb" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 05 19:24:46 crc kubenswrapper[4828]: I1205 19:24:46.248794 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-594b9fb44-r9zh6" podUID="99c01665-feb9-49f7-a97a-b6e6d87dc991" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 05 19:24:46 crc kubenswrapper[4828]: I1205 19:24:46.303508 4828 generic.go:334] "Generic (PLEG): container finished" podID="8d22bcf5-bf39-4595-8742-5d8c3018e7bf" containerID="5ee1753ab1252c0568398983c822d650048b764991a2e859706c769c71ba2df1" exitCode=0 Dec 05 19:24:46 crc kubenswrapper[4828]: I1205 19:24:46.303673 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9gl2x" event={"ID":"8d22bcf5-bf39-4595-8742-5d8c3018e7bf","Type":"ContainerDied","Data":"5ee1753ab1252c0568398983c822d650048b764991a2e859706c769c71ba2df1"} Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.145098 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.215050 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-khz2q"] Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.219222 4828 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" podUID="1dd4db34-a61b-4b8b-bc81-3458dfd1491b" containerName="dnsmasq-dns" containerID="cri-o://948bf1168f1fe5e8ef0204bb17b2697b5a80db6cb78befbc95abf38a7ba890c7" gracePeriod=10 Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.352264 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9gl2x" Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.402565 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9gl2x" event={"ID":"8d22bcf5-bf39-4595-8742-5d8c3018e7bf","Type":"ContainerDied","Data":"7e58f6cef8690c5dcd8fac03cdb31dfe8917f7ded35164b6a6fd2373818165d2"} Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.402606 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e58f6cef8690c5dcd8fac03cdb31dfe8917f7ded35164b6a6fd2373818165d2" Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.402665 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9gl2x" Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.411893 4828 generic.go:334] "Generic (PLEG): container finished" podID="ffc75dac-d7b0-41ce-ac4d-94f251036f95" containerID="d57d2dd93f88a88fabcd4e6c356fb068468d4b77d5c7702115c0c843355a345b" exitCode=0 Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.411946 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7gmm7" event={"ID":"ffc75dac-d7b0-41ce-ac4d-94f251036f95","Type":"ContainerDied","Data":"d57d2dd93f88a88fabcd4e6c356fb068468d4b77d5c7702115c0c843355a345b"} Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.413481 4828 generic.go:334] "Generic (PLEG): container finished" podID="1dd4db34-a61b-4b8b-bc81-3458dfd1491b" containerID="948bf1168f1fe5e8ef0204bb17b2697b5a80db6cb78befbc95abf38a7ba890c7" exitCode=0 Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.413513 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" event={"ID":"1dd4db34-a61b-4b8b-bc81-3458dfd1491b","Type":"ContainerDied","Data":"948bf1168f1fe5e8ef0204bb17b2697b5a80db6cb78befbc95abf38a7ba890c7"} Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.467699 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-db-sync-config-data\") pod \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\" (UID: \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\") " Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.467784 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-combined-ca-bundle\") pod \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\" (UID: \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\") " Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.467807 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht28x\" (UniqueName: \"kubernetes.io/projected/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-kube-api-access-ht28x\") pod \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\" (UID: \"8d22bcf5-bf39-4595-8742-5d8c3018e7bf\") " Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.474188 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-db-sync-config-data" 
(OuterVolumeSpecName: "db-sync-config-data") pod "8d22bcf5-bf39-4595-8742-5d8c3018e7bf" (UID: "8d22bcf5-bf39-4595-8742-5d8c3018e7bf"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.478278 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-kube-api-access-ht28x" (OuterVolumeSpecName: "kube-api-access-ht28x") pod "8d22bcf5-bf39-4595-8742-5d8c3018e7bf" (UID: "8d22bcf5-bf39-4595-8742-5d8c3018e7bf"). InnerVolumeSpecName "kube-api-access-ht28x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.496623 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d22bcf5-bf39-4595-8742-5d8c3018e7bf" (UID: "8d22bcf5-bf39-4595-8742-5d8c3018e7bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.570118 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.570319 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht28x\" (UniqueName: \"kubernetes.io/projected/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-kube-api-access-ht28x\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:53 crc kubenswrapper[4828]: I1205 19:24:53.570330 4828 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d22bcf5-bf39-4595-8742-5d8c3018e7bf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.425365 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" event={"ID":"1dd4db34-a61b-4b8b-bc81-3458dfd1491b","Type":"ContainerDied","Data":"9b34d2cf392c5c4de512dbfab7925c4c7de5ac6f08b792c174dee887ce5dc8b3"} Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.425652 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b34d2cf392c5c4de512dbfab7925c4c7de5ac6f08b792c174dee887ce5dc8b3" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.433777 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:24:54 crc kubenswrapper[4828]: E1205 19:24:54.536025 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="72755327-9414-46f2-b3ed-d19120b5876e" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.591353 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64hfq\" (UniqueName: \"kubernetes.io/projected/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-kube-api-access-64hfq\") pod \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.591461 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-config\") pod \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.591505 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-ovsdbserver-nb\") pod \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.591626 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-dns-swift-storage-0\") pod \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.591691 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-ovsdbserver-sb\") pod \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.591740 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-dns-svc\") pod \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\" (UID: \"1dd4db34-a61b-4b8b-bc81-3458dfd1491b\") " Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.632257 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-kube-api-access-64hfq" (OuterVolumeSpecName: "kube-api-access-64hfq") pod "1dd4db34-a61b-4b8b-bc81-3458dfd1491b" (UID: "1dd4db34-a61b-4b8b-bc81-3458dfd1491b"). InnerVolumeSpecName "kube-api-access-64hfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.663159 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-c9967b7f4-tjx24"] Dec 05 19:24:54 crc kubenswrapper[4828]: E1205 19:24:54.663578 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd4db34-a61b-4b8b-bc81-3458dfd1491b" containerName="dnsmasq-dns" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.663589 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd4db34-a61b-4b8b-bc81-3458dfd1491b" containerName="dnsmasq-dns" Dec 05 19:24:54 crc kubenswrapper[4828]: E1205 19:24:54.663603 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd4db34-a61b-4b8b-bc81-3458dfd1491b" containerName="init" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.663609 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd4db34-a61b-4b8b-bc81-3458dfd1491b" containerName="init" Dec 05 19:24:54 crc kubenswrapper[4828]: E1205 19:24:54.663619 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d22bcf5-bf39-4595-8742-5d8c3018e7bf" containerName="barbican-db-sync" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.663626 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d22bcf5-bf39-4595-8742-5d8c3018e7bf" containerName="barbican-db-sync" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.663798 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d22bcf5-bf39-4595-8742-5d8c3018e7bf" containerName="barbican-db-sync" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.663834 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd4db34-a61b-4b8b-bc81-3458dfd1491b" containerName="dnsmasq-dns" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.664711 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.671333 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.671641 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.671779 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zllbn" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.677544 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6d6b94f97f-2m6zj"] Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.680224 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.685385 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.692004 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c9967b7f4-tjx24"] Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.694335 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64hfq\" (UniqueName: \"kubernetes.io/projected/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-kube-api-access-64hfq\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.711728 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1dd4db34-a61b-4b8b-bc81-3458dfd1491b" (UID: "1dd4db34-a61b-4b8b-bc81-3458dfd1491b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.731892 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d6b94f97f-2m6zj"] Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.748165 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1dd4db34-a61b-4b8b-bc81-3458dfd1491b" (UID: "1dd4db34-a61b-4b8b-bc81-3458dfd1491b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.772475 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-config" (OuterVolumeSpecName: "config") pod "1dd4db34-a61b-4b8b-bc81-3458dfd1491b" (UID: "1dd4db34-a61b-4b8b-bc81-3458dfd1491b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800160 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba560005-dff7-4d93-b2aa-58d922405ff3-combined-ca-bundle\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800262 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba560005-dff7-4d93-b2aa-58d922405ff3-config-data\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800315 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msmzh\" (UniqueName: \"kubernetes.io/projected/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-kube-api-access-msmzh\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800357 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-config-data-custom\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800394 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba560005-dff7-4d93-b2aa-58d922405ff3-config-data-custom\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800434 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-logs\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800460 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zpzx\" (UniqueName: \"kubernetes.io/projected/ba560005-dff7-4d93-b2aa-58d922405ff3-kube-api-access-9zpzx\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800519 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-combined-ca-bundle\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800552 4828 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba560005-dff7-4d93-b2aa-58d922405ff3-logs\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800572 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-config-data\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800648 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800662 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.800676 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.804379 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1dd4db34-a61b-4b8b-bc81-3458dfd1491b" (UID: "1dd4db34-a61b-4b8b-bc81-3458dfd1491b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.823398 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1dd4db34-a61b-4b8b-bc81-3458dfd1491b" (UID: "1dd4db34-a61b-4b8b-bc81-3458dfd1491b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.858966 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-dlcbk"] Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.861419 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.899882 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-dlcbk"] Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.905841 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba560005-dff7-4d93-b2aa-58d922405ff3-logs\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.905885 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-config-data\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.905927 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba560005-dff7-4d93-b2aa-58d922405ff3-combined-ca-bundle\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.905967 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba560005-dff7-4d93-b2aa-58d922405ff3-config-data\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.906007 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msmzh\" (UniqueName: \"kubernetes.io/projected/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-kube-api-access-msmzh\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.906037 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-config-data-custom\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.906061 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba560005-dff7-4d93-b2aa-58d922405ff3-config-data-custom\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.906089 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-logs\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.906108 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9zpzx\" (UniqueName: \"kubernetes.io/projected/ba560005-dff7-4d93-b2aa-58d922405ff3-kube-api-access-9zpzx\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.906152 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-combined-ca-bundle\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.906206 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.906226 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd4db34-a61b-4b8b-bc81-3458dfd1491b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.907133 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba560005-dff7-4d93-b2aa-58d922405ff3-logs\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.908667 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-logs\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.913528 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba560005-dff7-4d93-b2aa-58d922405ff3-config-data-custom\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.913534 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-combined-ca-bundle\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.916331 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba560005-dff7-4d93-b2aa-58d922405ff3-combined-ca-bundle\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.926443 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba560005-dff7-4d93-b2aa-58d922405ff3-config-data\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " 
pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.927666 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-config-data\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.943085 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-config-data-custom\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.954039 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msmzh\" (UniqueName: \"kubernetes.io/projected/e1a17074-48bc-4f34-8a44-dd1321ff8fc1-kube-api-access-msmzh\") pod \"barbican-keystone-listener-c9967b7f4-tjx24\" (UID: \"e1a17074-48bc-4f34-8a44-dd1321ff8fc1\") " pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.963495 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:24:54 crc kubenswrapper[4828]: I1205 19:24:54.968562 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zpzx\" (UniqueName: \"kubernetes.io/projected/ba560005-dff7-4d93-b2aa-58d922405ff3-kube-api-access-9zpzx\") pod \"barbican-worker-6d6b94f97f-2m6zj\" (UID: \"ba560005-dff7-4d93-b2aa-58d922405ff3\") " pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.009047 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-69cd8bdf78-r2wnx"] Dec 05 19:24:55 crc kubenswrapper[4828]: E1205 19:24:55.009561 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc75dac-d7b0-41ce-ac4d-94f251036f95" containerName="cinder-db-sync" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.009575 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc75dac-d7b0-41ce-ac4d-94f251036f95" containerName="cinder-db-sync" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.009816 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc75dac-d7b0-41ce-ac4d-94f251036f95" containerName="cinder-db-sync" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.011067 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.020162 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.020914 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-config\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.020999 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.021027 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-dns-svc\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.021110 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngdnq\" (UniqueName: \"kubernetes.io/projected/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-kube-api-access-ngdnq\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.021150 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.021207 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.021323 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.047111 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6d6b94f97f-2m6zj" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.050886 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69cd8bdf78-r2wnx"] Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.124403 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-combined-ca-bundle\") pod \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.124474 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-db-sync-config-data\") pod \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.124544 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-scripts\") pod \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.124583 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-config-data\") pod \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.124662 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffc75dac-d7b0-41ce-ac4d-94f251036f95-etc-machine-id\") pod \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.124691 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjw66\" (UniqueName: \"kubernetes.io/projected/ffc75dac-d7b0-41ce-ac4d-94f251036f95-kube-api-access-mjw66\") pod \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\" (UID: \"ffc75dac-d7b0-41ce-ac4d-94f251036f95\") " Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.125084 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.125109 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-dns-svc\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.125201 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngdnq\" (UniqueName: \"kubernetes.io/projected/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-kube-api-access-ngdnq\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc 
kubenswrapper[4828]: I1205 19:24:55.125235 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.125270 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbf14167-7355-4185-b0c5-8258f1c43132-logs\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.125298 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d5ls\" (UniqueName: \"kubernetes.io/projected/fbf14167-7355-4185-b0c5-8258f1c43132-kube-api-access-6d5ls\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.125348 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.125391 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-combined-ca-bundle\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.125420 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-config-data-custom\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.125444 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-config-data\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.125479 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-config\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.126447 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-config\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 
05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.131920 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffc75dac-d7b0-41ce-ac4d-94f251036f95-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ffc75dac-d7b0-41ce-ac4d-94f251036f95" (UID: "ffc75dac-d7b0-41ce-ac4d-94f251036f95"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.132861 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.140018 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ffc75dac-d7b0-41ce-ac4d-94f251036f95" (UID: "ffc75dac-d7b0-41ce-ac4d-94f251036f95"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.140743 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-dns-svc\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.141633 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.142183 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.158030 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-scripts" (OuterVolumeSpecName: "scripts") pod "ffc75dac-d7b0-41ce-ac4d-94f251036f95" (UID: "ffc75dac-d7b0-41ce-ac4d-94f251036f95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.160346 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc75dac-d7b0-41ce-ac4d-94f251036f95-kube-api-access-mjw66" (OuterVolumeSpecName: "kube-api-access-mjw66") pod "ffc75dac-d7b0-41ce-ac4d-94f251036f95" (UID: "ffc75dac-d7b0-41ce-ac4d-94f251036f95"). InnerVolumeSpecName "kube-api-access-mjw66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.176487 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngdnq\" (UniqueName: \"kubernetes.io/projected/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-kube-api-access-ngdnq\") pod \"dnsmasq-dns-85ff748b95-dlcbk\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.212156 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.226700 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbf14167-7355-4185-b0c5-8258f1c43132-logs\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.226746 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d5ls\" (UniqueName: \"kubernetes.io/projected/fbf14167-7355-4185-b0c5-8258f1c43132-kube-api-access-6d5ls\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.226807 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-combined-ca-bundle\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.226839 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-config-data-custom\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.226858 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-config-data\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.227002 4828 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffc75dac-d7b0-41ce-ac4d-94f251036f95-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.227013 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjw66\" (UniqueName: \"kubernetes.io/projected/ffc75dac-d7b0-41ce-ac4d-94f251036f95-kube-api-access-mjw66\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.227024 4828 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.227032 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.228010 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbf14167-7355-4185-b0c5-8258f1c43132-logs\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.250438 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-combined-ca-bundle\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.250757 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-config-data\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.255258 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-config-data-custom\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.261720 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffc75dac-d7b0-41ce-ac4d-94f251036f95" (UID: "ffc75dac-d7b0-41ce-ac4d-94f251036f95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.268630 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d5ls\" (UniqueName: \"kubernetes.io/projected/fbf14167-7355-4185-b0c5-8258f1c43132-kube-api-access-6d5ls\") pod \"barbican-api-69cd8bdf78-r2wnx\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.311979 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-config-data" (OuterVolumeSpecName: "config-data") pod "ffc75dac-d7b0-41ce-ac4d-94f251036f95" (UID: "ffc75dac-d7b0-41ce-ac4d-94f251036f95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.328887 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.328921 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc75dac-d7b0-41ce-ac4d-94f251036f95-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.357809 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.522052 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72755327-9414-46f2-b3ed-d19120b5876e","Type":"ContainerStarted","Data":"3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48"} Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.522255 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72755327-9414-46f2-b3ed-d19120b5876e" containerName="ceilometer-notification-agent" containerID="cri-o://057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea" gracePeriod=30 Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.525632 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.525844 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72755327-9414-46f2-b3ed-d19120b5876e" containerName="sg-core" containerID="cri-o://77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e" gracePeriod=30 Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.525978 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72755327-9414-46f2-b3ed-d19120b5876e" containerName="proxy-httpd" containerID="cri-o://3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48" gracePeriod=30 Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.544808 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-khz2q" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.544971 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7gmm7" event={"ID":"ffc75dac-d7b0-41ce-ac4d-94f251036f95","Type":"ContainerDied","Data":"013eed33e30af68389bf2830a3eaece76c89104effb51774b51df0a20ede8300"} Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.545014 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="013eed33e30af68389bf2830a3eaece76c89104effb51774b51df0a20ede8300" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.545110 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7gmm7" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.612900 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-khz2q"] Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.628230 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-khz2q"] Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.766883 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.768512 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.772140 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.775231 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.775566 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.775675 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nzb8d" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.801922 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.849931 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c9967b7f4-tjx24"] Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.870243 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.870304 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.870360 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.870381 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-config-data\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.870400 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntczv\" (UniqueName: \"kubernetes.io/projected/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-kube-api-access-ntczv\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.870441 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-scripts\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.974037 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.974124 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.974168 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-config-data\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.974194 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntczv\" (UniqueName: \"kubernetes.io/projected/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-kube-api-access-ntczv\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.974192 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.974242 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-scripts\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.974506 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.981711 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.981849 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-config-data\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.982403 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-scripts\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:55 crc kubenswrapper[4828]: I1205 19:24:55.983482 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.179270 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-699b69c564-442lb" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.218908 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntczv\" (UniqueName: \"kubernetes.io/projected/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-kube-api-access-ntczv\") pod \"cinder-scheduler-0\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " pod="openstack/cinder-scheduler-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.252592 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-594b9fb44-r9zh6" podUID="99c01665-feb9-49f7-a97a-b6e6d87dc991" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.289229 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-dlcbk"] Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.427939 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-dlcbk"] Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.482400 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.503556 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd4db34-a61b-4b8b-bc81-3458dfd1491b" path="/var/lib/kubelet/pods/1dd4db34-a61b-4b8b-bc81-3458dfd1491b/volumes" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.504474 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-krcgx"] Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.524074 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.563663 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d6b94f97f-2m6zj"] Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.591902 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-krcgx"] Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.617315 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" event={"ID":"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe","Type":"ContainerStarted","Data":"c4d893271ae8b77447e86fa68ab7c632177771bcc18da1688fb528920b46668d"} Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.617625 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdq7j\" (UniqueName: \"kubernetes.io/projected/9c825b05-679b-4869-846d-ba11b6cdda19-kube-api-access-zdq7j\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.617670 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-config\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.617705 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.617790 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.617840 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.617899 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.620318 4828 generic.go:334] "Generic (PLEG): container finished" podID="72755327-9414-46f2-b3ed-d19120b5876e" containerID="3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48" exitCode=0 Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.620351 4828 generic.go:334] "Generic (PLEG): container finished" 
podID="72755327-9414-46f2-b3ed-d19120b5876e" containerID="77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e" exitCode=2 Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.620396 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72755327-9414-46f2-b3ed-d19120b5876e","Type":"ContainerDied","Data":"3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48"} Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.620424 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72755327-9414-46f2-b3ed-d19120b5876e","Type":"ContainerDied","Data":"77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e"} Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.627310 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.629072 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.632554 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.632608 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d6b94f97f-2m6zj" event={"ID":"ba560005-dff7-4d93-b2aa-58d922405ff3","Type":"ContainerStarted","Data":"cf8d7c29139981a36662bd806b6869569ccbaef4c3bced971481d1f0d62ef285"} Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.640315 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.644416 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" event={"ID":"e1a17074-48bc-4f34-8a44-dd1321ff8fc1","Type":"ContainerStarted","Data":"db1c39a8d36c851e7db86bbe0e771eaebccf9b0ccbe9b0758a8f8ffc396c5688"} Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720165 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720222 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720249 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720301 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720326 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md6cb\" (UniqueName: \"kubernetes.io/projected/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-kube-api-access-md6cb\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720406 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720429 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-scripts\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720458 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-logs\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720493 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdq7j\" (UniqueName: \"kubernetes.io/projected/9c825b05-679b-4869-846d-ba11b6cdda19-kube-api-access-zdq7j\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720511 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-config\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720534 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720601 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-config-data-custom\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.720634 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-config-data\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.721548 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.722277 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.722929 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.724590 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-config\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.725114 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.776751 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdq7j\" (UniqueName: \"kubernetes.io/projected/9c825b05-679b-4869-846d-ba11b6cdda19-kube-api-access-zdq7j\") pod \"dnsmasq-dns-5c9776ccc5-krcgx\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.822471 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-config-data-custom\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.822519 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-config-data\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.822555 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.822607 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md6cb\" (UniqueName: \"kubernetes.io/projected/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-kube-api-access-md6cb\") pod \"cinder-api-0\" (UID: 
\"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.822654 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.822676 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-scripts\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.822702 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-logs\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.823087 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-logs\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.824951 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.833512 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-config-data\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.834023 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-config-data-custom\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.844969 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-scripts\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.845262 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.850932 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md6cb\" (UniqueName: \"kubernetes.io/projected/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-kube-api-access-md6cb\") pod \"cinder-api-0\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " pod="openstack/cinder-api-0" Dec 05 19:24:56 crc 
Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.866358 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx"
Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.901591 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69cd8bdf78-r2wnx"]
Dec 05 19:24:56 crc kubenswrapper[4828]: I1205 19:24:56.964315 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.104931 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 05 19:24:57 crc kubenswrapper[4828]: W1205 19:24:57.130279 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda63cd477_b66e_4a12_bec5_a5a39b37b0eb.slice/crio-c876678ec513e63435a7f7f7c6d39ad5520c25d24cb44ce56f58ece53e54ae45 WatchSource:0}: Error finding container c876678ec513e63435a7f7f7c6d39ad5520c25d24cb44ce56f58ece53e54ae45: Status 404 returned error can't find the container with id c876678ec513e63435a7f7f7c6d39ad5520c25d24cb44ce56f58ece53e54ae45
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.551067 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.622399 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.630685 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-krcgx"]
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.647593 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw44h\" (UniqueName: \"kubernetes.io/projected/72755327-9414-46f2-b3ed-d19120b5876e-kube-api-access-kw44h\") pod \"72755327-9414-46f2-b3ed-d19120b5876e\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") "
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.647762 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-scripts\") pod \"72755327-9414-46f2-b3ed-d19120b5876e\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") "
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.647882 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-sg-core-conf-yaml\") pod \"72755327-9414-46f2-b3ed-d19120b5876e\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") "
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.647947 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-run-httpd\") pod \"72755327-9414-46f2-b3ed-d19120b5876e\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") "
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.648016 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-combined-ca-bundle\") pod \"72755327-9414-46f2-b3ed-d19120b5876e\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") "
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.648068 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-log-httpd\") pod \"72755327-9414-46f2-b3ed-d19120b5876e\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") "
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-log-httpd\") pod \"72755327-9414-46f2-b3ed-d19120b5876e\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.648087 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-config-data\") pod \"72755327-9414-46f2-b3ed-d19120b5876e\" (UID: \"72755327-9414-46f2-b3ed-d19120b5876e\") " Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.650121 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "72755327-9414-46f2-b3ed-d19120b5876e" (UID: "72755327-9414-46f2-b3ed-d19120b5876e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.650261 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "72755327-9414-46f2-b3ed-d19120b5876e" (UID: "72755327-9414-46f2-b3ed-d19120b5876e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.652583 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-scripts" (OuterVolumeSpecName: "scripts") pod "72755327-9414-46f2-b3ed-d19120b5876e" (UID: "72755327-9414-46f2-b3ed-d19120b5876e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.653883 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72755327-9414-46f2-b3ed-d19120b5876e-kube-api-access-kw44h" (OuterVolumeSpecName: "kube-api-access-kw44h") pod "72755327-9414-46f2-b3ed-d19120b5876e" (UID: "72755327-9414-46f2-b3ed-d19120b5876e"). InnerVolumeSpecName "kube-api-access-kw44h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.657953 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" event={"ID":"9c825b05-679b-4869-846d-ba11b6cdda19","Type":"ContainerStarted","Data":"44484adf8e9af3fdc4e890e5599750f59f5c11949c3938c1d2e5c63805eab150"} Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.680459 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69cd8bdf78-r2wnx" event={"ID":"fbf14167-7355-4185-b0c5-8258f1c43132","Type":"ContainerStarted","Data":"e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7"} Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.680503 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69cd8bdf78-r2wnx" event={"ID":"fbf14167-7355-4185-b0c5-8258f1c43132","Type":"ContainerStarted","Data":"c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0"} Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.680513 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69cd8bdf78-r2wnx" event={"ID":"fbf14167-7355-4185-b0c5-8258f1c43132","Type":"ContainerStarted","Data":"7da24fe1a2bb54255f82eb17e5bd50b40858b87e1880cefdb534218700fd9ccb"} Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.681620 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.681657 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.702588 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29","Type":"ContainerStarted","Data":"dd9a90d9f121602a9d8949184a06f697b244ea949b447a807aeee7725eac97db"} Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.725264 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "72755327-9414-46f2-b3ed-d19120b5876e" (UID: "72755327-9414-46f2-b3ed-d19120b5876e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.731780 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a63cd477-b66e-4a12-bec5-a5a39b37b0eb","Type":"ContainerStarted","Data":"c876678ec513e63435a7f7f7c6d39ad5520c25d24cb44ce56f58ece53e54ae45"} Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.734816 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-69cd8bdf78-r2wnx" podStartSLOduration=3.734790007 podStartE2EDuration="3.734790007s" podCreationTimestamp="2025-12-05 19:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:57.723261314 +0000 UTC m=+1275.618483620" watchObservedRunningTime="2025-12-05 19:24:57.734790007 +0000 UTC m=+1275.630012313" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.746180 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72755327-9414-46f2-b3ed-d19120b5876e" (UID: "72755327-9414-46f2-b3ed-d19120b5876e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.746358 4828 generic.go:334] "Generic (PLEG): container finished" podID="0d9959bb-415b-41e6-ad9d-a2bd2fcedafe" containerID="d9383e49080302f87b80d593ed36c58cdb92fefd33ed3aaeb9e161610f1167bf" exitCode=0 Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.746420 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" event={"ID":"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe","Type":"ContainerDied","Data":"d9383e49080302f87b80d593ed36c58cdb92fefd33ed3aaeb9e161610f1167bf"} Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.750331 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.750362 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.750376 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72755327-9414-46f2-b3ed-d19120b5876e-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.750399 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw44h\" (UniqueName: \"kubernetes.io/projected/72755327-9414-46f2-b3ed-d19120b5876e-kube-api-access-kw44h\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.750412 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.750423 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:57 
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.772208 4828 generic.go:334] "Generic (PLEG): container finished" podID="72755327-9414-46f2-b3ed-d19120b5876e" containerID="057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea" exitCode=0
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.772253 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72755327-9414-46f2-b3ed-d19120b5876e","Type":"ContainerDied","Data":"057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea"}
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.772278 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72755327-9414-46f2-b3ed-d19120b5876e","Type":"ContainerDied","Data":"7d2b440a6f49f3b62a263944e77046573dd0f17b4046655c22777c4fa5b43bc0"}
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.772296 4828 scope.go:117] "RemoveContainer" containerID="3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48"
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.772426 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.820275 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-config-data" (OuterVolumeSpecName: "config-data") pod "72755327-9414-46f2-b3ed-d19120b5876e" (UID: "72755327-9414-46f2-b3ed-d19120b5876e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.844623 4828 scope.go:117] "RemoveContainer" containerID="77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e"
Dec 05 19:24:57 crc kubenswrapper[4828]: I1205 19:24:57.854022 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72755327-9414-46f2-b3ed-d19120b5876e-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.184986 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.185379 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.192780 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 05 19:24:58 crc kubenswrapper[4828]: E1205 19:24:58.193564 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72755327-9414-46f2-b3ed-d19120b5876e" containerName="sg-core"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.193582 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="72755327-9414-46f2-b3ed-d19120b5876e" containerName="sg-core"
Dec 05 19:24:58 crc kubenswrapper[4828]: E1205 19:24:58.193614 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72755327-9414-46f2-b3ed-d19120b5876e" containerName="ceilometer-notification-agent"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.193621 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="72755327-9414-46f2-b3ed-d19120b5876e" containerName="ceilometer-notification-agent"
Dec 05 19:24:58 crc kubenswrapper[4828]: E1205 19:24:58.193720 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72755327-9414-46f2-b3ed-d19120b5876e" containerName="proxy-httpd"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.193728 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="72755327-9414-46f2-b3ed-d19120b5876e" containerName="proxy-httpd"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.193927 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="72755327-9414-46f2-b3ed-d19120b5876e" containerName="sg-core"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.193961 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="72755327-9414-46f2-b3ed-d19120b5876e" containerName="ceilometer-notification-agent"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.193978 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="72755327-9414-46f2-b3ed-d19120b5876e" containerName="proxy-httpd"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.195899 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.199661 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.199921 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.205180 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.271709 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d665a74-59df-4fd9-924b-c082280c3f13-run-httpd\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.271981 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.272085 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.272191 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-config-data\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.272258 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d665a74-59df-4fd9-924b-c082280c3f13-log-httpd\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0"
Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.272310 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk26f\" (UniqueName: \"kubernetes.io/projected/5d665a74-59df-4fd9-924b-c082280c3f13-kube-api-access-vk26f\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0"
\"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.272347 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-scripts\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.374764 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.375068 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-config-data\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.375098 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d665a74-59df-4fd9-924b-c082280c3f13-log-httpd\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.375121 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk26f\" (UniqueName: \"kubernetes.io/projected/5d665a74-59df-4fd9-924b-c082280c3f13-kube-api-access-vk26f\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.375151 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-scripts\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.375201 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d665a74-59df-4fd9-924b-c082280c3f13-run-httpd\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.375222 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.376407 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d665a74-59df-4fd9-924b-c082280c3f13-log-httpd\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.380236 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d665a74-59df-4fd9-924b-c082280c3f13-run-httpd\") pod \"ceilometer-0\" (UID: 
\"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.381755 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.385429 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.386076 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-config-data\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.391627 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-scripts\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.407577 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk26f\" (UniqueName: \"kubernetes.io/projected/5d665a74-59df-4fd9-924b-c082280c3f13-kube-api-access-vk26f\") pod \"ceilometer-0\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.458360 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72755327-9414-46f2-b3ed-d19120b5876e" path="/var/lib/kubelet/pods/72755327-9414-46f2-b3ed-d19120b5876e/volumes" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.560546 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.784107 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.796084 4828 generic.go:334] "Generic (PLEG): container finished" podID="9c825b05-679b-4869-846d-ba11b6cdda19" containerID="66e3306a5c0fa739135c1249813ea7469570227f9e3befa3b444f3b88ea6cadb" exitCode=0 Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.796161 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" event={"ID":"9c825b05-679b-4869-846d-ba11b6cdda19","Type":"ContainerDied","Data":"66e3306a5c0fa739135c1249813ea7469570227f9e3befa3b444f3b88ea6cadb"} Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.804770 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29","Type":"ContainerStarted","Data":"7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b"} Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.806705 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.806689 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-dlcbk" event={"ID":"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe","Type":"ContainerDied","Data":"c4d893271ae8b77447e86fa68ab7c632177771bcc18da1688fb528920b46668d"} Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.881778 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-dns-swift-storage-0\") pod \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.881843 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-ovsdbserver-nb\") pod \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.882158 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-config\") pod \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.882459 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngdnq\" (UniqueName: \"kubernetes.io/projected/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-kube-api-access-ngdnq\") pod \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.882501 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-ovsdbserver-sb\") pod \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.883188 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-dns-svc\") pod \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\" (UID: \"0d9959bb-415b-41e6-ad9d-a2bd2fcedafe\") " Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.887187 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-kube-api-access-ngdnq" (OuterVolumeSpecName: "kube-api-access-ngdnq") pod "0d9959bb-415b-41e6-ad9d-a2bd2fcedafe" (UID: "0d9959bb-415b-41e6-ad9d-a2bd2fcedafe"). InnerVolumeSpecName "kube-api-access-ngdnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.904141 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0d9959bb-415b-41e6-ad9d-a2bd2fcedafe" (UID: "0d9959bb-415b-41e6-ad9d-a2bd2fcedafe"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.907464 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-config" (OuterVolumeSpecName: "config") pod "0d9959bb-415b-41e6-ad9d-a2bd2fcedafe" (UID: "0d9959bb-415b-41e6-ad9d-a2bd2fcedafe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.909541 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0d9959bb-415b-41e6-ad9d-a2bd2fcedafe" (UID: "0d9959bb-415b-41e6-ad9d-a2bd2fcedafe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.910040 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0d9959bb-415b-41e6-ad9d-a2bd2fcedafe" (UID: "0d9959bb-415b-41e6-ad9d-a2bd2fcedafe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.911098 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d9959bb-415b-41e6-ad9d-a2bd2fcedafe" (UID: "0d9959bb-415b-41e6-ad9d-a2bd2fcedafe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.985241 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.985276 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.985289 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.985299 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.985308 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngdnq\" (UniqueName: \"kubernetes.io/projected/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-kube-api-access-ngdnq\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:58 crc kubenswrapper[4828]: I1205 19:24:58.985316 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.005264 4828 scope.go:117] "RemoveContainer" 
containerID="057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea" Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.074745 4828 scope.go:117] "RemoveContainer" containerID="3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48" Dec 05 19:24:59 crc kubenswrapper[4828]: E1205 19:24:59.075120 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48\": container with ID starting with 3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48 not found: ID does not exist" containerID="3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48" Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.075155 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48"} err="failed to get container status \"3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48\": rpc error: code = NotFound desc = could not find container \"3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48\": container with ID starting with 3f440e8207ced2bc1ac839363a0af7b15732d256af62d7529ad5a614b5577e48 not found: ID does not exist" Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.075183 4828 scope.go:117] "RemoveContainer" containerID="77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e" Dec 05 19:24:59 crc kubenswrapper[4828]: E1205 19:24:59.075719 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e\": container with ID starting with 77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e not found: ID does not exist" containerID="77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e" Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.075768 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e"} err="failed to get container status \"77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e\": rpc error: code = NotFound desc = could not find container \"77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e\": container with ID starting with 77de86682baa95c4d0a6a57993055952aa8c31ae12d2e89011ce41c64b84011e not found: ID does not exist" Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.075803 4828 scope.go:117] "RemoveContainer" containerID="057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea" Dec 05 19:24:59 crc kubenswrapper[4828]: E1205 19:24:59.076166 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea\": container with ID starting with 057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea not found: ID does not exist" containerID="057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea" Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.076192 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea"} err="failed to get container status \"057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea\": rpc error: code = 
NotFound desc = could not find container \"057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea\": container with ID starting with 057563d0cf38b61d7d6dd495da778af65b865ebbbe15dd738843b002841e01ea not found: ID does not exist" Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.076206 4828 scope.go:117] "RemoveContainer" containerID="d9383e49080302f87b80d593ed36c58cdb92fefd33ed3aaeb9e161610f1167bf" Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.170400 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-dlcbk"] Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.195543 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-dlcbk"] Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.383505 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.428610 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.826182 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" event={"ID":"9c825b05-679b-4869-846d-ba11b6cdda19","Type":"ContainerStarted","Data":"3a223c5d417b42dc622810b8d6b0f0ec6b1af5d0265fc9aa843fa9c5c8e2c8d4"} Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.826627 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.830346 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" event={"ID":"e1a17074-48bc-4f34-8a44-dd1321ff8fc1","Type":"ContainerStarted","Data":"2548d53ccb66cfe63c5edfb644b181245140ee1a96ff2da40c188722fc807efa"} Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.834876 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d665a74-59df-4fd9-924b-c082280c3f13","Type":"ContainerStarted","Data":"37f7456c67746ede925b5cd1e1a6ceb11e69fbd544a79de25d32d231cd43d6fa"} Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.854306 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d6b94f97f-2m6zj" event={"ID":"ba560005-dff7-4d93-b2aa-58d922405ff3","Type":"ContainerStarted","Data":"7cff846b1ad02eec7801b45487dd64aba868f64a2cfc0b63f58610d40151540d"} Dec 05 19:24:59 crc kubenswrapper[4828]: I1205 19:24:59.856072 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" podStartSLOduration=3.856052158 podStartE2EDuration="3.856052158s" podCreationTimestamp="2025-12-05 19:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:24:59.845399209 +0000 UTC m=+1277.740621515" watchObservedRunningTime="2025-12-05 19:24:59.856052158 +0000 UTC m=+1277.751274464" Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.463303 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9959bb-415b-41e6-ad9d-a2bd2fcedafe" path="/var/lib/kubelet/pods/0d9959bb-415b-41e6-ad9d-a2bd2fcedafe/volumes" Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.878158 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" 
event={"ID":"e1a17074-48bc-4f34-8a44-dd1321ff8fc1","Type":"ContainerStarted","Data":"2c0f9dbb6f602cd73cae5caddceb25b9d53b90da5f856edf6a14dc22982148ff"} Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.883007 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29","Type":"ContainerStarted","Data":"fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4"} Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.883162 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" containerName="cinder-api-log" containerID="cri-o://7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b" gracePeriod=30 Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.883356 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.883389 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" containerName="cinder-api" containerID="cri-o://fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4" gracePeriod=30 Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.904182 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d665a74-59df-4fd9-924b-c082280c3f13","Type":"ContainerStarted","Data":"851363f5f50f0d558604f1626f43f5cc900dd0bfd2649b3c5b52dcb0d8593f9b"} Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.909944 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a63cd477-b66e-4a12-bec5-a5a39b37b0eb","Type":"ContainerStarted","Data":"75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84"} Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.909985 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a63cd477-b66e-4a12-bec5-a5a39b37b0eb","Type":"ContainerStarted","Data":"ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4"} Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.919980 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d6b94f97f-2m6zj" event={"ID":"ba560005-dff7-4d93-b2aa-58d922405ff3","Type":"ContainerStarted","Data":"a17cfb81473b9a34e3d204e282740cdfdf027bbeddfb374e7482985cfa2f81f9"} Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.920037 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-c9967b7f4-tjx24" podStartSLOduration=3.727282224 podStartE2EDuration="6.920017899s" podCreationTimestamp="2025-12-05 19:24:54 +0000 UTC" firstStartedPulling="2025-12-05 19:24:55.881468613 +0000 UTC m=+1273.776690919" lastFinishedPulling="2025-12-05 19:24:59.074204288 +0000 UTC m=+1276.969426594" observedRunningTime="2025-12-05 19:25:00.905182636 +0000 UTC m=+1278.800404952" watchObservedRunningTime="2025-12-05 19:25:00.920017899 +0000 UTC m=+1278.815240205" Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.929088 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.929067333 podStartE2EDuration="4.929067333s" podCreationTimestamp="2025-12-05 19:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-05 19:25:00.925953129 +0000 UTC m=+1278.821175455" watchObservedRunningTime="2025-12-05 19:25:00.929067333 +0000 UTC m=+1278.824289639" Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.951575 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.016591797 podStartE2EDuration="5.951557122s" podCreationTimestamp="2025-12-05 19:24:55 +0000 UTC" firstStartedPulling="2025-12-05 19:24:57.138965416 +0000 UTC m=+1275.034187722" lastFinishedPulling="2025-12-05 19:24:59.073930741 +0000 UTC m=+1276.969153047" observedRunningTime="2025-12-05 19:25:00.943703509 +0000 UTC m=+1278.838925825" watchObservedRunningTime="2025-12-05 19:25:00.951557122 +0000 UTC m=+1278.846779428" Dec 05 19:25:00 crc kubenswrapper[4828]: I1205 19:25:00.999489 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6d6b94f97f-2m6zj" podStartSLOduration=4.281873165 podStartE2EDuration="6.99946921s" podCreationTimestamp="2025-12-05 19:24:54 +0000 UTC" firstStartedPulling="2025-12-05 19:24:56.356293245 +0000 UTC m=+1274.251515541" lastFinishedPulling="2025-12-05 19:24:59.07388928 +0000 UTC m=+1276.969111586" observedRunningTime="2025-12-05 19:25:00.967218787 +0000 UTC m=+1278.862441093" watchObservedRunningTime="2025-12-05 19:25:00.99946921 +0000 UTC m=+1278.894691516" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.483246 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.718874 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.770510 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md6cb\" (UniqueName: \"kubernetes.io/projected/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-kube-api-access-md6cb\") pod \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.770572 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-logs\") pod \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.770592 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-config-data\") pod \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.770612 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-combined-ca-bundle\") pod \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.770663 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-etc-machine-id\") pod \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " Dec 05 19:25:01 crc 
kubenswrapper[4828]: I1205 19:25:01.770729 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-scripts\") pod \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.770758 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-config-data-custom\") pod \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\" (UID: \"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29\") " Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.773092 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" (UID: "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.777202 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-logs" (OuterVolumeSpecName: "logs") pod "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" (UID: "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.791219 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" (UID: "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.803159 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-scripts" (OuterVolumeSpecName: "scripts") pod "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" (UID: "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.803430 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-kube-api-access-md6cb" (OuterVolumeSpecName: "kube-api-access-md6cb") pod "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" (UID: "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29"). InnerVolumeSpecName "kube-api-access-md6cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.850096 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" (UID: "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.872702 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md6cb\" (UniqueName: \"kubernetes.io/projected/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-kube-api-access-md6cb\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.872735 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.872744 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.872752 4828 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.872761 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.872768 4828 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.914993 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-config-data" (OuterVolumeSpecName: "config-data") pod "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" (UID: "5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.936278 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.939224 4828 generic.go:334] "Generic (PLEG): container finished" podID="ada41f83-4947-4d14-a1c1-c1dd44f7d656" containerID="b8c46648beff69ab8a2e0648f2c05a4aadfdf97eccb9fc3a83ed3db5a2a29f57" exitCode=137 Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.939261 4828 generic.go:334] "Generic (PLEG): container finished" podID="ada41f83-4947-4d14-a1c1-c1dd44f7d656" containerID="d366579dcb0151d659d11ed38464a9aee419b4edeadc63d7760ecb1ec70c9f80" exitCode=137 Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.939323 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b878f489-x2jpv" event={"ID":"ada41f83-4947-4d14-a1c1-c1dd44f7d656","Type":"ContainerDied","Data":"b8c46648beff69ab8a2e0648f2c05a4aadfdf97eccb9fc3a83ed3db5a2a29f57"} Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.939358 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b878f489-x2jpv" event={"ID":"ada41f83-4947-4d14-a1c1-c1dd44f7d656","Type":"ContainerDied","Data":"d366579dcb0151d659d11ed38464a9aee419b4edeadc63d7760ecb1ec70c9f80"} Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.946739 4828 generic.go:334] "Generic (PLEG): container finished" podID="5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" containerID="fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4" exitCode=0 Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.946809 4828 generic.go:334] "Generic (PLEG): container finished" podID="5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" containerID="7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b" exitCode=143 Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.946939 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29","Type":"ContainerDied","Data":"fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4"} Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.947005 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29","Type":"ContainerDied","Data":"7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b"} Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.947024 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29","Type":"ContainerDied","Data":"dd9a90d9f121602a9d8949184a06f697b244ea949b447a807aeee7725eac97db"} Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.947043 4828 scope.go:117] "RemoveContainer" containerID="fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.947310 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.980769 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw5zq\" (UniqueName: \"kubernetes.io/projected/b940b754-ad6e-454e-ab2a-242b1b63b344-kube-api-access-tw5zq\") pod \"b940b754-ad6e-454e-ab2a-242b1b63b344\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.981653 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b940b754-ad6e-454e-ab2a-242b1b63b344-horizon-secret-key\") pod \"b940b754-ad6e-454e-ab2a-242b1b63b344\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.981693 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b940b754-ad6e-454e-ab2a-242b1b63b344-scripts\") pod \"b940b754-ad6e-454e-ab2a-242b1b63b344\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.981733 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b940b754-ad6e-454e-ab2a-242b1b63b344-config-data\") pod \"b940b754-ad6e-454e-ab2a-242b1b63b344\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.981837 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b940b754-ad6e-454e-ab2a-242b1b63b344-logs\") pod \"b940b754-ad6e-454e-ab2a-242b1b63b344\" (UID: \"b940b754-ad6e-454e-ab2a-242b1b63b344\") " Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.983763 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.985741 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b940b754-ad6e-454e-ab2a-242b1b63b344-kube-api-access-tw5zq" (OuterVolumeSpecName: "kube-api-access-tw5zq") pod "b940b754-ad6e-454e-ab2a-242b1b63b344" (UID: "b940b754-ad6e-454e-ab2a-242b1b63b344"). InnerVolumeSpecName "kube-api-access-tw5zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.987710 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b940b754-ad6e-454e-ab2a-242b1b63b344-logs" (OuterVolumeSpecName: "logs") pod "b940b754-ad6e-454e-ab2a-242b1b63b344" (UID: "b940b754-ad6e-454e-ab2a-242b1b63b344"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.993107 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d665a74-59df-4fd9-924b-c082280c3f13","Type":"ContainerStarted","Data":"48a2a6b4e9efc18d8ec16540cbdfce06b76bc66827257abc04d239be48058871"} Dec 05 19:25:01 crc kubenswrapper[4828]: I1205 19:25:01.993161 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d665a74-59df-4fd9-924b-c082280c3f13","Type":"ContainerStarted","Data":"f5eafbdaa5a9a9ad7b20c2b8e3d88bfe67115458b3a79c1d3ccba19b2895ce29"} Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.001492 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b940b754-ad6e-454e-ab2a-242b1b63b344-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b940b754-ad6e-454e-ab2a-242b1b63b344" (UID: "b940b754-ad6e-454e-ab2a-242b1b63b344"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.004744 4828 scope.go:117] "RemoveContainer" containerID="7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.011848 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b940b754-ad6e-454e-ab2a-242b1b63b344-scripts" (OuterVolumeSpecName: "scripts") pod "b940b754-ad6e-454e-ab2a-242b1b63b344" (UID: "b940b754-ad6e-454e-ab2a-242b1b63b344"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.015358 4828 generic.go:334] "Generic (PLEG): container finished" podID="b940b754-ad6e-454e-ab2a-242b1b63b344" containerID="3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11" exitCode=137 Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.015393 4828 generic.go:334] "Generic (PLEG): container finished" podID="b940b754-ad6e-454e-ab2a-242b1b63b344" containerID="462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107" exitCode=137 Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.015410 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77b4ccd85-stwwx" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.015482 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77b4ccd85-stwwx" event={"ID":"b940b754-ad6e-454e-ab2a-242b1b63b344","Type":"ContainerDied","Data":"3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11"} Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.015517 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77b4ccd85-stwwx" event={"ID":"b940b754-ad6e-454e-ab2a-242b1b63b344","Type":"ContainerDied","Data":"462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107"} Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.015534 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77b4ccd85-stwwx" event={"ID":"b940b754-ad6e-454e-ab2a-242b1b63b344","Type":"ContainerDied","Data":"7bddc2b78395c6b31d8940454d8c4a3248c9e9823589266f83ccc629fb2e6412"} Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.019546 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.026795 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b940b754-ad6e-454e-ab2a-242b1b63b344-config-data" (OuterVolumeSpecName: "config-data") pod "b940b754-ad6e-454e-ab2a-242b1b63b344" (UID: "b940b754-ad6e-454e-ab2a-242b1b63b344"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.040183 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.058899 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 05 19:25:02 crc kubenswrapper[4828]: E1205 19:25:02.059305 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b940b754-ad6e-454e-ab2a-242b1b63b344" containerName="horizon" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.059318 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b940b754-ad6e-454e-ab2a-242b1b63b344" containerName="horizon" Dec 05 19:25:02 crc kubenswrapper[4828]: E1205 19:25:02.059350 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b940b754-ad6e-454e-ab2a-242b1b63b344" containerName="horizon-log" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.059357 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b940b754-ad6e-454e-ab2a-242b1b63b344" containerName="horizon-log" Dec 05 19:25:02 crc kubenswrapper[4828]: E1205 19:25:02.059366 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9959bb-415b-41e6-ad9d-a2bd2fcedafe" containerName="init" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.059372 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9959bb-415b-41e6-ad9d-a2bd2fcedafe" containerName="init" Dec 05 19:25:02 crc kubenswrapper[4828]: E1205 19:25:02.059387 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" containerName="cinder-api" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.059393 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" containerName="cinder-api" Dec 05 19:25:02 crc kubenswrapper[4828]: E1205 19:25:02.059417 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" containerName="cinder-api-log" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.059425 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" containerName="cinder-api-log" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.059620 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" containerName="cinder-api" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.059642 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b940b754-ad6e-454e-ab2a-242b1b63b344" containerName="horizon-log" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.059655 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" containerName="cinder-api-log" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.059668 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b940b754-ad6e-454e-ab2a-242b1b63b344" containerName="horizon" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.059677 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9959bb-415b-41e6-ad9d-a2bd2fcedafe" containerName="init" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.060619 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.062148 4828 scope.go:117] "RemoveContainer" containerID="fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4" Dec 05 19:25:02 crc kubenswrapper[4828]: E1205 19:25:02.065307 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4\": container with ID starting with fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4 not found: ID does not exist" containerID="fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.065349 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4"} err="failed to get container status \"fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4\": rpc error: code = NotFound desc = could not find container \"fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4\": container with ID starting with fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4 not found: ID does not exist" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.065375 4828 scope.go:117] "RemoveContainer" containerID="7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.066096 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.066441 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.066584 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 05 19:25:02 crc kubenswrapper[4828]: E1205 19:25:02.067241 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b\": container with ID starting with 7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b not found: ID does not exist" containerID="7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.067280 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b"} err="failed to get container status \"7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b\": rpc error: code = NotFound desc = could not find container \"7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b\": container with ID starting with 7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b not found: ID does not exist" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.067297 4828 scope.go:117] "RemoveContainer" containerID="fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.067572 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4"} err="failed to get container status \"fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4\": rpc error: code = NotFound desc = could not find container \"fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4\": container with ID starting with fa84a67505dfbebf9ec77eee992291ea6479519439669bcd2afd7d270142b2a4 not found: ID does not exist" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.067592 4828 scope.go:117] "RemoveContainer" containerID="7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.074404 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.075917 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b"} err="failed to get container status \"7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b\": rpc error: code = NotFound desc = could not find container \"7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b\": container with ID starting with 7fe7e19f57c44a4eade1fdfa82b09e6b3af0bd5b7dc98435c57443c6be14e19b not found: ID does not exist" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.076172 4828 scope.go:117] "RemoveContainer" containerID="3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085020 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpbpv\" (UniqueName: \"kubernetes.io/projected/55a11269-8096-4009-a3b0-44f7d554fe4f-kube-api-access-zpbpv\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085055 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-scripts\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085168 
4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a11269-8096-4009-a3b0-44f7d554fe4f-logs\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085184 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085204 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085235 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55a11269-8096-4009-a3b0-44f7d554fe4f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085314 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-config-data-custom\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085364 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085413 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-config-data\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085481 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw5zq\" (UniqueName: \"kubernetes.io/projected/b940b754-ad6e-454e-ab2a-242b1b63b344-kube-api-access-tw5zq\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085491 4828 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b940b754-ad6e-454e-ab2a-242b1b63b344-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085501 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b940b754-ad6e-454e-ab2a-242b1b63b344-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085509 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b940b754-ad6e-454e-ab2a-242b1b63b344-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.085517 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b940b754-ad6e-454e-ab2a-242b1b63b344-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.187386 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpbpv\" (UniqueName: \"kubernetes.io/projected/55a11269-8096-4009-a3b0-44f7d554fe4f-kube-api-access-zpbpv\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.187426 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-scripts\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.187497 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.187526 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a11269-8096-4009-a3b0-44f7d554fe4f-logs\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.187543 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.187567 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55a11269-8096-4009-a3b0-44f7d554fe4f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.187607 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-config-data-custom\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.187649 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.187694 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-config-data\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " 
pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.188920 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55a11269-8096-4009-a3b0-44f7d554fe4f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.189154 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a11269-8096-4009-a3b0-44f7d554fe4f-logs\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.195894 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-config-data-custom\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.197497 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-scripts\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.198468 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.198574 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-config-data\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.203036 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.203459 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55a11269-8096-4009-a3b0-44f7d554fe4f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.212692 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpbpv\" (UniqueName: \"kubernetes.io/projected/55a11269-8096-4009-a3b0-44f7d554fe4f-kube-api-access-zpbpv\") pod \"cinder-api-0\" (UID: \"55a11269-8096-4009-a3b0-44f7d554fe4f\") " pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.259184 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8769b7dc8-87tcr"] Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.260722 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.269383 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.274607 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.275731 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8769b7dc8-87tcr"] Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.279673 4828 scope.go:117] "RemoveContainer" containerID="462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.288880 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-combined-ca-bundle\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.293020 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrl8z\" (UniqueName: \"kubernetes.io/projected/7b34386c-5f6a-420f-8889-5dd31e8560c0-kube-api-access-rrl8z\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.293198 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-public-tls-certs\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.293318 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b34386c-5f6a-420f-8889-5dd31e8560c0-logs\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.293396 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-config-data\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.293423 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-internal-tls-certs\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.293507 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-config-data-custom\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: 
\"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.380227 4828 scope.go:117] "RemoveContainer" containerID="3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.380764 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:25:02 crc kubenswrapper[4828]: E1205 19:25:02.381151 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11\": container with ID starting with 3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11 not found: ID does not exist" containerID="3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.381174 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11"} err="failed to get container status \"3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11\": rpc error: code = NotFound desc = could not find container \"3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11\": container with ID starting with 3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11 not found: ID does not exist" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.381202 4828 scope.go:117] "RemoveContainer" containerID="462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107" Dec 05 19:25:02 crc kubenswrapper[4828]: E1205 19:25:02.382558 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107\": container with ID starting with 462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107 not found: ID does not exist" containerID="462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.382579 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107"} err="failed to get container status \"462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107\": rpc error: code = NotFound desc = could not find container \"462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107\": container with ID starting with 462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107 not found: ID does not exist" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.382594 4828 scope.go:117] "RemoveContainer" containerID="3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.382899 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11"} err="failed to get container status \"3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11\": rpc error: code = NotFound desc = could not find container \"3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11\": container with ID starting with 3b379beb33c706eabca98355de0270dc4608cfe637f7bc3d01b527642705ce11 not found: ID does not exist" Dec 05 19:25:02 crc 
kubenswrapper[4828]: I1205 19:25:02.382923 4828 scope.go:117] "RemoveContainer" containerID="462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.385473 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107"} err="failed to get container status \"462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107\": rpc error: code = NotFound desc = could not find container \"462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107\": container with ID starting with 462f8ae3c0237519374ae74ba74dab416c097a37c47b9880fa6c54a792953107 not found: ID does not exist" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.386840 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77b4ccd85-stwwx"] Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.391008 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.394450 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ada41f83-4947-4d14-a1c1-c1dd44f7d656-config-data\") pod \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.394538 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ada41f83-4947-4d14-a1c1-c1dd44f7d656-logs\") pod \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.394598 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ada41f83-4947-4d14-a1c1-c1dd44f7d656-horizon-secret-key\") pod \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.394643 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd6dq\" (UniqueName: \"kubernetes.io/projected/ada41f83-4947-4d14-a1c1-c1dd44f7d656-kube-api-access-qd6dq\") pod \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.394754 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ada41f83-4947-4d14-a1c1-c1dd44f7d656-scripts\") pod \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\" (UID: \"ada41f83-4947-4d14-a1c1-c1dd44f7d656\") " Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.395143 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ada41f83-4947-4d14-a1c1-c1dd44f7d656-logs" (OuterVolumeSpecName: "logs") pod "ada41f83-4947-4d14-a1c1-c1dd44f7d656" (UID: "ada41f83-4947-4d14-a1c1-c1dd44f7d656"). InnerVolumeSpecName "logs". 
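[editor's note: the alternating E-level "ContainerStatus from runtime service failed ... NotFound" and "DeleteContainer returned error" entries above are the kubelet re-confirming deletion of two containers that CRI-O has already removed; NotFound here means the cleanup already happened, not that deletion failed. A small sketch, under the same stdin assumption as before, that tallies these NotFound repeats per container ID so they can be separated from genuine runtime errors:]

    // notfound.go: count NotFound "DeleteContainer returned error" repeats per
    // container ID. Sketch only; the regex mirrors the journal quoting above.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    var reNF = regexp.MustCompile(`DeleteContainer returned error" containerID=\{"Type":"cri-o","ID":"([0-9a-f]{64})"\}.*NotFound`)

    func main() {
    	counts := map[string]int{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
    	for sc.Scan() {
    		if m := reNF.FindStringSubmatch(sc.Text()); m != nil {
    			counts[m[1]]++
    		}
    	}
    	for id, n := range counts {
    		fmt.Printf("%s... NotFound on delete %d time(s)\n", id[:12], n)
    	}
    }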
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.395160 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-public-tls-certs\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.395339 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b34386c-5f6a-420f-8889-5dd31e8560c0-logs\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.395417 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-config-data\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.395450 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-internal-tls-certs\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.395626 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-config-data-custom\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.395919 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-combined-ca-bundle\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.395967 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrl8z\" (UniqueName: \"kubernetes.io/projected/7b34386c-5f6a-420f-8889-5dd31e8560c0-kube-api-access-rrl8z\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.399993 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ada41f83-4947-4d14-a1c1-c1dd44f7d656-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.400950 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b34386c-5f6a-420f-8889-5dd31e8560c0-logs\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.412738 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77b4ccd85-stwwx"] Dec 05 
19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.457381 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada41f83-4947-4d14-a1c1-c1dd44f7d656-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ada41f83-4947-4d14-a1c1-c1dd44f7d656" (UID: "ada41f83-4947-4d14-a1c1-c1dd44f7d656"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.459393 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-public-tls-certs\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.463853 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ada41f83-4947-4d14-a1c1-c1dd44f7d656-config-data" (OuterVolumeSpecName: "config-data") pod "ada41f83-4947-4d14-a1c1-c1dd44f7d656" (UID: "ada41f83-4947-4d14-a1c1-c1dd44f7d656"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.465842 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-config-data\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.490998 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-config-data-custom\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.491544 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-internal-tls-certs\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.492579 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrl8z\" (UniqueName: \"kubernetes.io/projected/7b34386c-5f6a-420f-8889-5dd31e8560c0-kube-api-access-rrl8z\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.498962 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ada41f83-4947-4d14-a1c1-c1dd44f7d656-kube-api-access-qd6dq" (OuterVolumeSpecName: "kube-api-access-qd6dq") pod "ada41f83-4947-4d14-a1c1-c1dd44f7d656" (UID: "ada41f83-4947-4d14-a1c1-c1dd44f7d656"). InnerVolumeSpecName "kube-api-access-qd6dq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.500275 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b34386c-5f6a-420f-8889-5dd31e8560c0-combined-ca-bundle\") pod \"barbican-api-8769b7dc8-87tcr\" (UID: \"7b34386c-5f6a-420f-8889-5dd31e8560c0\") " pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.502399 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ada41f83-4947-4d14-a1c1-c1dd44f7d656-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.502430 4828 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ada41f83-4947-4d14-a1c1-c1dd44f7d656-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.502448 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd6dq\" (UniqueName: \"kubernetes.io/projected/ada41f83-4947-4d14-a1c1-c1dd44f7d656-kube-api-access-qd6dq\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.530233 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29" path="/var/lib/kubelet/pods/5bdac6f1-0e4c-4d67-b6ad-39b5d7157a29/volumes" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.530585 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ada41f83-4947-4d14-a1c1-c1dd44f7d656-scripts" (OuterVolumeSpecName: "scripts") pod "ada41f83-4947-4d14-a1c1-c1dd44f7d656" (UID: "ada41f83-4947-4d14-a1c1-c1dd44f7d656"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.531320 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b940b754-ad6e-454e-ab2a-242b1b63b344" path="/var/lib/kubelet/pods/b940b754-ad6e-454e-ab2a-242b1b63b344/volumes" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.606161 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ada41f83-4947-4d14-a1c1-c1dd44f7d656-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.667065 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:02 crc kubenswrapper[4828]: I1205 19:25:02.970952 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 19:25:02 crc kubenswrapper[4828]: W1205 19:25:02.979727 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55a11269_8096_4009_a3b0_44f7d554fe4f.slice/crio-52394aea7b0146903380d667fd0e160e4a8a2b626518f2f91ae5d7dbd85e96ee WatchSource:0}: Error finding container 52394aea7b0146903380d667fd0e160e4a8a2b626518f2f91ae5d7dbd85e96ee: Status 404 returned error can't find the container with id 52394aea7b0146903380d667fd0e160e4a8a2b626518f2f91ae5d7dbd85e96ee Dec 05 19:25:03 crc kubenswrapper[4828]: I1205 19:25:03.026335 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"55a11269-8096-4009-a3b0-44f7d554fe4f","Type":"ContainerStarted","Data":"52394aea7b0146903380d667fd0e160e4a8a2b626518f2f91ae5d7dbd85e96ee"} Dec 05 19:25:03 crc kubenswrapper[4828]: I1205 19:25:03.032891 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b878f489-x2jpv" event={"ID":"ada41f83-4947-4d14-a1c1-c1dd44f7d656","Type":"ContainerDied","Data":"a1622b9715999824e58e4dd7f6b8c987ec97012e8cf5df82182203cf70f8cbe6"} Dec 05 19:25:03 crc kubenswrapper[4828]: I1205 19:25:03.032943 4828 scope.go:117] "RemoveContainer" containerID="b8c46648beff69ab8a2e0648f2c05a4aadfdf97eccb9fc3a83ed3db5a2a29f57" Dec 05 19:25:03 crc kubenswrapper[4828]: I1205 19:25:03.033082 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b878f489-x2jpv" Dec 05 19:25:03 crc kubenswrapper[4828]: I1205 19:25:03.059680 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b878f489-x2jpv"] Dec 05 19:25:03 crc kubenswrapper[4828]: I1205 19:25:03.068022 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-b878f489-x2jpv"] Dec 05 19:25:03 crc kubenswrapper[4828]: I1205 19:25:03.125840 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8769b7dc8-87tcr"] Dec 05 19:25:03 crc kubenswrapper[4828]: I1205 19:25:03.259088 4828 scope.go:117] "RemoveContainer" containerID="d366579dcb0151d659d11ed38464a9aee419b4edeadc63d7760ecb1ec70c9f80" Dec 05 19:25:03 crc kubenswrapper[4828]: W1205 19:25:03.264851 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b34386c_5f6a_420f_8889_5dd31e8560c0.slice/crio-38591975efb3cb98b0437029764b67fa4b4a176dc77568f7dc49a2896820d97a WatchSource:0}: Error finding container 38591975efb3cb98b0437029764b67fa4b4a176dc77568f7dc49a2896820d97a: Status 404 returned error can't find the container with id 38591975efb3cb98b0437029764b67fa4b4a176dc77568f7dc49a2896820d97a Dec 05 19:25:04 crc kubenswrapper[4828]: I1205 19:25:04.104760 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d665a74-59df-4fd9-924b-c082280c3f13","Type":"ContainerStarted","Data":"0619fde435e3394815f01c932a85580772e7d9421c17b9fe4986fcd75a22eb13"} Dec 05 19:25:04 crc kubenswrapper[4828]: I1205 19:25:04.105157 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 19:25:04 crc kubenswrapper[4828]: I1205 19:25:04.108175 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"55a11269-8096-4009-a3b0-44f7d554fe4f","Type":"ContainerStarted","Data":"ea1dd2827b46df201582b72fc60797e291af412d06d3c693c41d9f35bfa46c36"} Dec 05 19:25:04 crc kubenswrapper[4828]: I1205 19:25:04.112285 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8769b7dc8-87tcr" event={"ID":"7b34386c-5f6a-420f-8889-5dd31e8560c0","Type":"ContainerStarted","Data":"d20714e4c212ab554426a4a5167d410aa522282e033d0157a6372de0e086dd6f"} Dec 05 19:25:04 crc kubenswrapper[4828]: I1205 19:25:04.112353 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8769b7dc8-87tcr" event={"ID":"7b34386c-5f6a-420f-8889-5dd31e8560c0","Type":"ContainerStarted","Data":"e9f42af3036147319c2218c573205e98a75002980b48c8e3e03ab1af02ede22f"} Dec 05 19:25:04 crc kubenswrapper[4828]: I1205 19:25:04.112367 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8769b7dc8-87tcr" event={"ID":"7b34386c-5f6a-420f-8889-5dd31e8560c0","Type":"ContainerStarted","Data":"38591975efb3cb98b0437029764b67fa4b4a176dc77568f7dc49a2896820d97a"} Dec 05 19:25:04 crc kubenswrapper[4828]: I1205 19:25:04.113257 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:04 crc kubenswrapper[4828]: I1205 19:25:04.113291 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:04 crc kubenswrapper[4828]: I1205 19:25:04.132595 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.290408433 podStartE2EDuration="6.1325745s" podCreationTimestamp="2025-12-05 19:24:58 +0000 UTC" firstStartedPulling="2025-12-05 19:24:59.500684041 +0000 UTC m=+1277.395906347" lastFinishedPulling="2025-12-05 19:25:03.342850108 +0000 UTC m=+1281.238072414" observedRunningTime="2025-12-05 19:25:04.127788901 +0000 UTC m=+1282.023011217" watchObservedRunningTime="2025-12-05 19:25:04.1325745 +0000 UTC m=+1282.027796806" Dec 05 19:25:04 crc kubenswrapper[4828]: I1205 19:25:04.153812 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8769b7dc8-87tcr" podStartSLOduration=2.153788375 podStartE2EDuration="2.153788375s" podCreationTimestamp="2025-12-05 19:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:25:04.153386294 +0000 UTC m=+1282.048608610" watchObservedRunningTime="2025-12-05 19:25:04.153788375 +0000 UTC m=+1282.049010681" Dec 05 19:25:04 crc kubenswrapper[4828]: I1205 19:25:04.458069 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ada41f83-4947-4d14-a1c1-c1dd44f7d656" path="/var/lib/kubelet/pods/ada41f83-4947-4d14-a1c1-c1dd44f7d656/volumes" Dec 05 19:25:05 crc kubenswrapper[4828]: I1205 19:25:05.125418 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"55a11269-8096-4009-a3b0-44f7d554fe4f","Type":"ContainerStarted","Data":"7e92cb24364f42573c82d1f0d869286800dcffa3f8de08091b9c3a7788ec47f4"} Dec 05 19:25:05 crc kubenswrapper[4828]: I1205 19:25:05.143971 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.143954656 podStartE2EDuration="4.143954656s" podCreationTimestamp="2025-12-05 19:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-05 19:25:05.141421509 +0000 UTC m=+1283.036643825" watchObservedRunningTime="2025-12-05 19:25:05.143954656 +0000 UTC m=+1283.039176962" Dec 05 19:25:06 crc kubenswrapper[4828]: I1205 19:25:06.133642 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 19:25:06 crc kubenswrapper[4828]: I1205 19:25:06.753517 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-69cd8bdf78-r2wnx" podUID="fbf14167-7355-4185-b0c5-8258f1c43132" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 19:25:06 crc kubenswrapper[4828]: I1205 19:25:06.868966 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:25:06 crc kubenswrapper[4828]: I1205 19:25:06.901625 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 05 19:25:06 crc kubenswrapper[4828]: I1205 19:25:06.938767 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-xk8rt"] Dec 05 19:25:06 crc kubenswrapper[4828]: I1205 19:25:06.939080 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" podUID="e5ac966d-0aae-4f8f-a38b-2debce3a8e64" containerName="dnsmasq-dns" containerID="cri-o://ba2fdb4af29467ade541b486f75fe8d0339ef5335c74e270382607186258287b" gracePeriod=10 Dec 05 19:25:06 crc kubenswrapper[4828]: I1205 19:25:06.970300 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.151360 4828 generic.go:334] "Generic (PLEG): container finished" podID="e5ac966d-0aae-4f8f-a38b-2debce3a8e64" containerID="ba2fdb4af29467ade541b486f75fe8d0339ef5335c74e270382607186258287b" exitCode=0 Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.151415 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" event={"ID":"e5ac966d-0aae-4f8f-a38b-2debce3a8e64","Type":"ContainerDied","Data":"ba2fdb4af29467ade541b486f75fe8d0339ef5335c74e270382607186258287b"} Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.151842 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a63cd477-b66e-4a12-bec5-a5a39b37b0eb" containerName="cinder-scheduler" containerID="cri-o://ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4" gracePeriod=30 Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.151931 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a63cd477-b66e-4a12-bec5-a5a39b37b0eb" containerName="probe" containerID="cri-o://75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84" gracePeriod=30 Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.288142 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.403188 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.652479 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.713650 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-config\") pod \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.713776 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-dns-swift-storage-0\") pod \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.713850 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-dns-svc\") pod \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.713920 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-ovsdbserver-sb\") pod \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.713937 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-ovsdbserver-nb\") pod \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.713986 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psxsh\" (UniqueName: \"kubernetes.io/projected/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-kube-api-access-psxsh\") pod \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\" (UID: \"e5ac966d-0aae-4f8f-a38b-2debce3a8e64\") " Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.779229 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-kube-api-access-psxsh" (OuterVolumeSpecName: "kube-api-access-psxsh") pod "e5ac966d-0aae-4f8f-a38b-2debce3a8e64" (UID: "e5ac966d-0aae-4f8f-a38b-2debce3a8e64"). InnerVolumeSpecName "kube-api-access-psxsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.826165 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psxsh\" (UniqueName: \"kubernetes.io/projected/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-kube-api-access-psxsh\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.878096 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e5ac966d-0aae-4f8f-a38b-2debce3a8e64" (UID: "e5ac966d-0aae-4f8f-a38b-2debce3a8e64"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.886344 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e5ac966d-0aae-4f8f-a38b-2debce3a8e64" (UID: "e5ac966d-0aae-4f8f-a38b-2debce3a8e64"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.889456 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e5ac966d-0aae-4f8f-a38b-2debce3a8e64" (UID: "e5ac966d-0aae-4f8f-a38b-2debce3a8e64"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.901006 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e5ac966d-0aae-4f8f-a38b-2debce3a8e64" (UID: "e5ac966d-0aae-4f8f-a38b-2debce3a8e64"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.929796 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.929857 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.929875 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.929888 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:07 crc kubenswrapper[4828]: I1205 19:25:07.932055 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-config" (OuterVolumeSpecName: "config") pod "e5ac966d-0aae-4f8f-a38b-2debce3a8e64" (UID: "e5ac966d-0aae-4f8f-a38b-2debce3a8e64"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.031600 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ac966d-0aae-4f8f-a38b-2debce3a8e64-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.187353 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.190449 4828 generic.go:334] "Generic (PLEG): container finished" podID="a63cd477-b66e-4a12-bec5-a5a39b37b0eb" containerID="ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4" exitCode=0 Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.190557 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a63cd477-b66e-4a12-bec5-a5a39b37b0eb","Type":"ContainerDied","Data":"ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4"} Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.200775 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.201061 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-xk8rt" event={"ID":"e5ac966d-0aae-4f8f-a38b-2debce3a8e64","Type":"ContainerDied","Data":"145602531f3b887683b01a346401fada53871fdd879deef6382e343a73f56727"} Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.201250 4828 scope.go:117] "RemoveContainer" containerID="ba2fdb4af29467ade541b486f75fe8d0339ef5335c74e270382607186258287b" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.339560 4828 scope.go:117] "RemoveContainer" containerID="7ae90cf6690274ea287c9ef115d59521397089e7ad3294402b51921eea17c098" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.381909 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-xk8rt"] Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.387191 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-xk8rt"] Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.483675 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5ac966d-0aae-4f8f-a38b-2debce3a8e64" path="/var/lib/kubelet/pods/e5ac966d-0aae-4f8f-a38b-2debce3a8e64/volumes" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.737307 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.759188 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-config-data\") pod \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.759261 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntczv\" (UniqueName: \"kubernetes.io/projected/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-kube-api-access-ntczv\") pod \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.759289 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-combined-ca-bundle\") pod \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.759401 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-scripts\") pod \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.759531 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-config-data-custom\") pod \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.759576 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-etc-machine-id\") pod \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\" (UID: \"a63cd477-b66e-4a12-bec5-a5a39b37b0eb\") " Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.759979 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a63cd477-b66e-4a12-bec5-a5a39b37b0eb" (UID: "a63cd477-b66e-4a12-bec5-a5a39b37b0eb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.765386 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-kube-api-access-ntczv" (OuterVolumeSpecName: "kube-api-access-ntczv") pod "a63cd477-b66e-4a12-bec5-a5a39b37b0eb" (UID: "a63cd477-b66e-4a12-bec5-a5a39b37b0eb"). InnerVolumeSpecName "kube-api-access-ntczv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.765998 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a63cd477-b66e-4a12-bec5-a5a39b37b0eb" (UID: "a63cd477-b66e-4a12-bec5-a5a39b37b0eb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.806070 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-scripts" (OuterVolumeSpecName: "scripts") pod "a63cd477-b66e-4a12-bec5-a5a39b37b0eb" (UID: "a63cd477-b66e-4a12-bec5-a5a39b37b0eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.848459 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a63cd477-b66e-4a12-bec5-a5a39b37b0eb" (UID: "a63cd477-b66e-4a12-bec5-a5a39b37b0eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.867109 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.867152 4828 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.867166 4828 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.867178 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntczv\" (UniqueName: \"kubernetes.io/projected/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-kube-api-access-ntczv\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.867191 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.890005 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-config-data" (OuterVolumeSpecName: "config-data") pod "a63cd477-b66e-4a12-bec5-a5a39b37b0eb" (UID: "a63cd477-b66e-4a12-bec5-a5a39b37b0eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:08 crc kubenswrapper[4828]: I1205 19:25:08.969086 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a63cd477-b66e-4a12-bec5-a5a39b37b0eb-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.212402 4828 generic.go:334] "Generic (PLEG): container finished" podID="a63cd477-b66e-4a12-bec5-a5a39b37b0eb" containerID="75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84" exitCode=0 Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.212468 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a63cd477-b66e-4a12-bec5-a5a39b37b0eb","Type":"ContainerDied","Data":"75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84"} Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.212498 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a63cd477-b66e-4a12-bec5-a5a39b37b0eb","Type":"ContainerDied","Data":"c876678ec513e63435a7f7f7c6d39ad5520c25d24cb44ce56f58ece53e54ae45"} Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.212515 4828 scope.go:117] "RemoveContainer" containerID="75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.212531 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.237405 4828 scope.go:117] "RemoveContainer" containerID="ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.259434 4828 scope.go:117] "RemoveContainer" containerID="75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.259534 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 19:25:09 crc kubenswrapper[4828]: E1205 19:25:09.261991 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84\": container with ID starting with 75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84 not found: ID does not exist" containerID="75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.262026 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84"} err="failed to get container status \"75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84\": rpc error: code = NotFound desc = could not find container \"75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84\": container with ID starting with 75bad46a06073b85a2e5713d562066b478f6aa039726f3643c2b9c1e9b151f84 not found: ID does not exist" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.262048 4828 scope.go:117] "RemoveContainer" containerID="ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4" Dec 05 19:25:09 crc kubenswrapper[4828]: E1205 19:25:09.263992 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4\": container with 
ID starting with ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4 not found: ID does not exist" containerID="ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.264016 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4"} err="failed to get container status \"ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4\": rpc error: code = NotFound desc = could not find container \"ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4\": container with ID starting with ffba588f50d123a22a4afb599fe96b324a8ae164d939dbf470876d79316010b4 not found: ID does not exist" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.269001 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.288632 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 19:25:09 crc kubenswrapper[4828]: E1205 19:25:09.289257 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63cd477-b66e-4a12-bec5-a5a39b37b0eb" containerName="cinder-scheduler" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.289275 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63cd477-b66e-4a12-bec5-a5a39b37b0eb" containerName="cinder-scheduler" Dec 05 19:25:09 crc kubenswrapper[4828]: E1205 19:25:09.289296 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ac966d-0aae-4f8f-a38b-2debce3a8e64" containerName="dnsmasq-dns" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.289302 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ac966d-0aae-4f8f-a38b-2debce3a8e64" containerName="dnsmasq-dns" Dec 05 19:25:09 crc kubenswrapper[4828]: E1205 19:25:09.289325 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63cd477-b66e-4a12-bec5-a5a39b37b0eb" containerName="probe" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.289330 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63cd477-b66e-4a12-bec5-a5a39b37b0eb" containerName="probe" Dec 05 19:25:09 crc kubenswrapper[4828]: E1205 19:25:09.289346 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ada41f83-4947-4d14-a1c1-c1dd44f7d656" containerName="horizon-log" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.289352 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ada41f83-4947-4d14-a1c1-c1dd44f7d656" containerName="horizon-log" Dec 05 19:25:09 crc kubenswrapper[4828]: E1205 19:25:09.289364 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ac966d-0aae-4f8f-a38b-2debce3a8e64" containerName="init" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.289371 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ac966d-0aae-4f8f-a38b-2debce3a8e64" containerName="init" Dec 05 19:25:09 crc kubenswrapper[4828]: E1205 19:25:09.289385 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ada41f83-4947-4d14-a1c1-c1dd44f7d656" containerName="horizon" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.289391 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ada41f83-4947-4d14-a1c1-c1dd44f7d656" containerName="horizon" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.289552 4828 memory_manager.go:354] "RemoveStaleState removing state" 
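[editor's note: the cpu_manager.go:410 "RemoveStaleState: removing container" lines are logged at error level but are routine here: on the next pod ADD the CPU manager reconciles its checkpoint and drops assignments belonging to containers of pods already deleted (cinder-scheduler, dnsmasq-dns, horizon), with state_mem.go confirming "Deleted CPUSet assignment" and memory_manager.go doing the same for memory state. A sketch, same stdin assumption, that tallies the evicted (podUID, container) pairs:]

    // stalestate.go: list which (podUID, container) pairs the CPU manager
    // evicted as stale, from lines like those above. Sketch only.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    var re = regexp.MustCompile(`RemoveStaleState: removing container" podUID="([0-9a-f-]+)" containerName="([^"]+)"`)

    func main() {
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
    	seen := map[string]bool{}
    	for sc.Scan() {
    		if m := re.FindStringSubmatch(sc.Text()); m != nil && !seen[m[1]+m[2]] {
    			seen[m[1]+m[2]] = true
    			fmt.Printf("stale: pod %s container %s\n", m[1][:8], m[2])
    		}
    	}
    }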
podUID="a63cd477-b66e-4a12-bec5-a5a39b37b0eb" containerName="probe" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.289569 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ada41f83-4947-4d14-a1c1-c1dd44f7d656" containerName="horizon-log" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.289584 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ac966d-0aae-4f8f-a38b-2debce3a8e64" containerName="dnsmasq-dns" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.289595 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a63cd477-b66e-4a12-bec5-a5a39b37b0eb" containerName="cinder-scheduler" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.289607 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ada41f83-4947-4d14-a1c1-c1dd44f7d656" containerName="horizon" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.293077 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.297223 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.334113 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.380189 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49hm6\" (UniqueName: \"kubernetes.io/projected/714c55c4-ac9a-4e63-8159-04f311676ad5-kube-api-access-49hm6\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.380272 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/714c55c4-ac9a-4e63-8159-04f311676ad5-config-data\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.380319 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/714c55c4-ac9a-4e63-8159-04f311676ad5-scripts\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.380379 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/714c55c4-ac9a-4e63-8159-04f311676ad5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.380411 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714c55c4-ac9a-4e63-8159-04f311676ad5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.380545 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/714c55c4-ac9a-4e63-8159-04f311676ad5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.481792 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/714c55c4-ac9a-4e63-8159-04f311676ad5-config-data\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.482453 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/714c55c4-ac9a-4e63-8159-04f311676ad5-scripts\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.483061 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/714c55c4-ac9a-4e63-8159-04f311676ad5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.483099 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714c55c4-ac9a-4e63-8159-04f311676ad5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.483250 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/714c55c4-ac9a-4e63-8159-04f311676ad5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.483363 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49hm6\" (UniqueName: \"kubernetes.io/projected/714c55c4-ac9a-4e63-8159-04f311676ad5-kube-api-access-49hm6\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.486919 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/714c55c4-ac9a-4e63-8159-04f311676ad5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.499064 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/714c55c4-ac9a-4e63-8159-04f311676ad5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.499608 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/714c55c4-ac9a-4e63-8159-04f311676ad5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 
19:25:09.511752 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/714c55c4-ac9a-4e63-8159-04f311676ad5-config-data\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.517698 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/714c55c4-ac9a-4e63-8159-04f311676ad5-scripts\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.525324 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49hm6\" (UniqueName: \"kubernetes.io/projected/714c55c4-ac9a-4e63-8159-04f311676ad5-kube-api-access-49hm6\") pod \"cinder-scheduler-0\" (UID: \"714c55c4-ac9a-4e63-8159-04f311676ad5\") " pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.619772 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.690428 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.708483 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bfcc469f6-vtpj6" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.961242 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-699b69c564-442lb" Dec 05 19:25:09 crc kubenswrapper[4828]: I1205 19:25:09.968247 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:25:10 crc kubenswrapper[4828]: I1205 19:25:10.123534 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-89dcf679f-97rfx" Dec 05 19:25:10 crc kubenswrapper[4828]: I1205 19:25:10.247636 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 19:25:10 crc kubenswrapper[4828]: I1205 19:25:10.513666 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a63cd477-b66e-4a12-bec5-a5a39b37b0eb" path="/var/lib/kubelet/pods/a63cd477-b66e-4a12-bec5-a5a39b37b0eb/volumes" Dec 05 19:25:11 crc kubenswrapper[4828]: I1205 19:25:11.244739 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"714c55c4-ac9a-4e63-8159-04f311676ad5","Type":"ContainerStarted","Data":"057007b669526ac7230b04233e1a97b8bcbc786e26a6e48f7de9db21950dc041"} Dec 05 19:25:11 crc kubenswrapper[4828]: I1205 19:25:11.245088 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"714c55c4-ac9a-4e63-8159-04f311676ad5","Type":"ContainerStarted","Data":"0e17f166ee5639189deb056a321b572af4d230444067982d984db0c4b5e2a08d"} Dec 05 19:25:11 crc kubenswrapper[4828]: I1205 19:25:11.430969 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6dc6b5c7cf-8mlhh" Dec 05 19:25:11 crc kubenswrapper[4828]: I1205 19:25:11.513109 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-596dd8d85b-r59nh"]
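The cinder-scheduler-0 bring-up above shows the kubelet volume reconciler's three-phase pattern: operationExecutor.VerifyControllerAttachedVolume (reconciler_common.go:245) confirms each volume is attached before mounting, operationExecutor.MountVolume (reconciler_common.go:218) starts the mount, and MountVolume.SetUp succeeded (operation_generator.go:637) confirms completion; all six volumes (config-data, scripts, config-data-custom, combined-ca-bundle, etc-machine-id, kube-api-access-49hm6) finish before the pod sandbox starts. A minimal sketch for tracing that progression from a journal excerpt; the regexes are derived from the message formats visible here, and the phase list is a simplification of the kubelet's internal states:

```python
import re
import sys
from collections import defaultdict

# Reconciler phases in the order they appear in the log above.
PHASES = [
    "VerifyControllerAttachedVolume started",
    "MountVolume started",
    "MountVolume.SetUp succeeded",
]
# The \\? tolerates the backslash-escaped quotes in this journal dump.
VOLUME = re.compile(
    r'(VerifyControllerAttachedVolume started|MountVolume started|'
    r'MountVolume\.SetUp succeeded) for volume \\?"([\w.-]+)\\?"'
)

def trace_volumes(lines):
    """Map volume name -> set of reconciler phases observed."""
    seen = defaultdict(set)
    for line in lines:
        for phase, vol in VOLUME.findall(line):
            seen[vol].add(phase)
    return seen

if __name__ == "__main__":
    for vol, phases in sorted(trace_volumes(sys.stdin).items()):
        missing = [p for p in PHASES if p not in phases]
        print(f"{vol:25s} " + ("complete" if not missing else f"stuck at: {missing[0]}"))
```

Piping this section through stdin reports every volume as complete; a volume that never reaches SetUp is the usual suspect when a pod hangs in ContainerCreating.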
Dec 05 19:25:11 crc kubenswrapper[4828]: I1205 19:25:11.513339 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-596dd8d85b-r59nh" podUID="a89115ab-f300-433f-934e-dce679bf1877" containerName="neutron-api" containerID="cri-o://8b38dbdcc9480ed1ff926da8258e3e3ea50c6e23206ed3a90de306872e659f34" gracePeriod=30 Dec 05 19:25:11 crc kubenswrapper[4828]: I1205 19:25:11.513520 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-596dd8d85b-r59nh" podUID="a89115ab-f300-433f-934e-dce679bf1877" containerName="neutron-httpd" containerID="cri-o://66beaa93c902cca0bf08c8dbbf7ea351d4526dadc4e93efeaefa2d3e1b495cf1" gracePeriod=30 Dec 05 19:25:12 crc kubenswrapper[4828]: E1205 19:25:12.164426 4828 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda89115ab_f300_433f_934e_dce679bf1877.slice/crio-66beaa93c902cca0bf08c8dbbf7ea351d4526dadc4e93efeaefa2d3e1b495cf1.scope\": RecentStats: unable to find data in memory cache]" Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.256217 4828 generic.go:334] "Generic (PLEG): container finished" podID="a89115ab-f300-433f-934e-dce679bf1877" containerID="66beaa93c902cca0bf08c8dbbf7ea351d4526dadc4e93efeaefa2d3e1b495cf1" exitCode=0 Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.256320 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-596dd8d85b-r59nh" event={"ID":"a89115ab-f300-433f-934e-dce679bf1877","Type":"ContainerDied","Data":"66beaa93c902cca0bf08c8dbbf7ea351d4526dadc4e93efeaefa2d3e1b495cf1"} Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.259581 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"714c55c4-ac9a-4e63-8159-04f311676ad5","Type":"ContainerStarted","Data":"4c58b452444b1917470022575fa255c7cb9076557fd3dad1a87ab856921ebb20"} Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.283325 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.28330939 podStartE2EDuration="3.28330939s" podCreationTimestamp="2025-12-05 19:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:25:12.282480538 +0000 UTC m=+1290.177702844" watchObservedRunningTime="2025-12-05 19:25:12.28330939 +0000 UTC m=+1290.178531696" Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.846534 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
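The neutron-596dd8d85b-r59nh teardown above is the graceful path: on SyncLoop DELETE the kubelet logs "Killing container with a grace period" (SIGTERM) with gracePeriod=30, and the PLEG reports the container finished with exitCode=0 well inside the window; only a container still running when the grace period expires would be SIGKILLed. A sketch that pairs each kill with its ContainerDied event to measure actual shutdown latency, assuming one journal entry per line (as journalctl emits them) and logs from a single day:

```python
import re
import sys
from datetime import datetime

# klog header, e.g. "I1205 19:25:11.513339"; the date is implicit.
TS = re.compile(r"[IWE]\d{4} (\d{2}:\d{2}:\d{2}\.\d+)")
KILL = re.compile(r'Killing container with a grace period.*?'
                  r'containerID="cri-o://([0-9a-f]+)".*?gracePeriod=(\d+)')
DIED = re.compile(r'"Type":"ContainerDied","Data":"([0-9a-f]+)"')

def parse_ts(line):
    m = TS.search(line)
    return datetime.strptime(m.group(1), "%H:%M:%S.%f") if m else None

def termination_latency(lines):
    """Pair 'Killing container' entries with their ContainerDied events."""
    pending = {}  # container ID -> (kill time, grace period in seconds)
    for line in lines:
        if (m := KILL.search(line)):
            pending[m.group(1)] = (parse_ts(line), int(m.group(2)))
        elif (m := DIED.search(line)) and m.group(1) in pending:
            killed_at, grace = pending.pop(m.group(1))
            took = (parse_ts(line) - killed_at).total_seconds()
            print(f"{m.group(1)[:12]} exited after {took:.2f}s (grace {grace}s)")

if __name__ == "__main__":
    termination_latency(sys.stdin)
```

For neutron-httpd this comes out to roughly 0.74s (killed 19:25:11.513520, died 19:25:12.256217), far below the 30s budget.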
Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.858400 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.862920 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.863229 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.868595 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-b4bf9" Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.870598 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.946811 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9ckj\" (UniqueName: \"kubernetes.io/projected/847a8779-d691-4659-9166-a8f39abb55f4-kube-api-access-w9ckj\") pod \"openstackclient\" (UID: \"847a8779-d691-4659-9166-a8f39abb55f4\") " pod="openstack/openstackclient" Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.946907 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/847a8779-d691-4659-9166-a8f39abb55f4-openstack-config\") pod \"openstackclient\" (UID: \"847a8779-d691-4659-9166-a8f39abb55f4\") " pod="openstack/openstackclient" Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.946930 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/847a8779-d691-4659-9166-a8f39abb55f4-openstack-config-secret\") pod \"openstackclient\" (UID: \"847a8779-d691-4659-9166-a8f39abb55f4\") " pod="openstack/openstackclient" Dec 05 19:25:12 crc kubenswrapper[4828]: I1205 19:25:12.946990 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847a8779-d691-4659-9166-a8f39abb55f4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"847a8779-d691-4659-9166-a8f39abb55f4\") " pod="openstack/openstackclient" Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.048275 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9ckj\" (UniqueName: \"kubernetes.io/projected/847a8779-d691-4659-9166-a8f39abb55f4-kube-api-access-w9ckj\") pod \"openstackclient\" (UID: \"847a8779-d691-4659-9166-a8f39abb55f4\") " pod="openstack/openstackclient" Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.048343 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/847a8779-d691-4659-9166-a8f39abb55f4-openstack-config\") pod \"openstackclient\" (UID: \"847a8779-d691-4659-9166-a8f39abb55f4\") " pod="openstack/openstackclient" Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.048360 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/847a8779-d691-4659-9166-a8f39abb55f4-openstack-config-secret\") pod \"openstackclient\" (UID: \"847a8779-d691-4659-9166-a8f39abb55f4\") " pod="openstack/openstackclient" Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.048428 4828 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847a8779-d691-4659-9166-a8f39abb55f4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"847a8779-d691-4659-9166-a8f39abb55f4\") " pod="openstack/openstackclient" Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.051109 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/847a8779-d691-4659-9166-a8f39abb55f4-openstack-config\") pod \"openstackclient\" (UID: \"847a8779-d691-4659-9166-a8f39abb55f4\") " pod="openstack/openstackclient" Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.058401 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/847a8779-d691-4659-9166-a8f39abb55f4-openstack-config-secret\") pod \"openstackclient\" (UID: \"847a8779-d691-4659-9166-a8f39abb55f4\") " pod="openstack/openstackclient" Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.063881 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9ckj\" (UniqueName: \"kubernetes.io/projected/847a8779-d691-4659-9166-a8f39abb55f4-kube-api-access-w9ckj\") pod \"openstackclient\" (UID: \"847a8779-d691-4659-9166-a8f39abb55f4\") " pod="openstack/openstackclient" Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.064301 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847a8779-d691-4659-9166-a8f39abb55f4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"847a8779-d691-4659-9166-a8f39abb55f4\") " pod="openstack/openstackclient" Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.099573 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-594b9fb44-r9zh6" Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.154947 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-699b69c564-442lb" Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.204577 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.263842 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-699b69c564-442lb"] Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.309438 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-699b69c564-442lb" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon-log" containerID="cri-o://389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50" gracePeriod=30 Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.309530 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-699b69c564-442lb" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon" containerID="cri-o://fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9" gracePeriod=30 Dec 05 19:25:13 crc kubenswrapper[4828]: I1205 19:25:13.846841 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 19:25:14 crc kubenswrapper[4828]: I1205 19:25:14.319049 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"847a8779-d691-4659-9166-a8f39abb55f4","Type":"ContainerStarted","Data":"11998e4d224c07b565bf1f4ccd7b489f11ca0f63e30e6f4ed3b85bb15d99ead9"} Dec 05 19:25:14 crc kubenswrapper[4828]: I1205 19:25:14.619871 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 05 19:25:15 crc kubenswrapper[4828]: I1205 19:25:15.665660 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:16 crc kubenswrapper[4828]: I1205 19:25:16.297020 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 05 19:25:16 crc kubenswrapper[4828]: I1205 19:25:16.434765 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8769b7dc8-87tcr" Dec 05 19:25:16 crc kubenswrapper[4828]: I1205 19:25:16.523182 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-69cd8bdf78-r2wnx"] Dec 05 19:25:16 crc kubenswrapper[4828]: I1205 19:25:16.523736 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-69cd8bdf78-r2wnx" podUID="fbf14167-7355-4185-b0c5-8258f1c43132" containerName="barbican-api-log" containerID="cri-o://c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0" gracePeriod=30 Dec 05 19:25:16 crc kubenswrapper[4828]: I1205 19:25:16.524111 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-69cd8bdf78-r2wnx" podUID="fbf14167-7355-4185-b0c5-8258f1c43132" containerName="barbican-api" containerID="cri-o://e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7" gracePeriod=30 Dec 05 19:25:16 crc kubenswrapper[4828]: I1205 19:25:16.565980 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-699b69c564-442lb" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:56758->10.217.0.145:8443: read: connection reset by peer" Dec 05 19:25:17 crc kubenswrapper[4828]: I1205 19:25:17.446383 4828 generic.go:334] "Generic (PLEG): container finished" podID="fbf14167-7355-4185-b0c5-8258f1c43132" 
containerID="c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0" exitCode=143 Dec 05 19:25:17 crc kubenswrapper[4828]: I1205 19:25:17.446542 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69cd8bdf78-r2wnx" event={"ID":"fbf14167-7355-4185-b0c5-8258f1c43132","Type":"ContainerDied","Data":"c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0"} Dec 05 19:25:17 crc kubenswrapper[4828]: I1205 19:25:17.459874 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-699b69c564-442lb" event={"ID":"74df4612-463b-4b3c-8f2d-7dbb9494d6fe","Type":"ContainerDied","Data":"fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9"} Dec 05 19:25:17 crc kubenswrapper[4828]: I1205 19:25:17.460111 4828 generic.go:334] "Generic (PLEG): container finished" podID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerID="fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9" exitCode=0 Dec 05 19:25:19 crc kubenswrapper[4828]: I1205 19:25:19.921184 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.308361 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.436660 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-combined-ca-bundle\") pod \"fbf14167-7355-4185-b0c5-8258f1c43132\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.436836 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d5ls\" (UniqueName: \"kubernetes.io/projected/fbf14167-7355-4185-b0c5-8258f1c43132-kube-api-access-6d5ls\") pod \"fbf14167-7355-4185-b0c5-8258f1c43132\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.436907 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-config-data\") pod \"fbf14167-7355-4185-b0c5-8258f1c43132\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.436981 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbf14167-7355-4185-b0c5-8258f1c43132-logs\") pod \"fbf14167-7355-4185-b0c5-8258f1c43132\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") " Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.437093 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-config-data-custom\") pod \"fbf14167-7355-4185-b0c5-8258f1c43132\" (UID: \"fbf14167-7355-4185-b0c5-8258f1c43132\") "
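exitCode=143 for barbican-api-log is the usual 128+signal convention: 128+15, i.e. the process ended on the SIGTERM delivered when the grace period began, whereas barbican-api's exitCode=0 just below is a clean self-shutdown, and sg-core's exitCode=2 further down reads as an ordinary application error. A small decoder for the convention; 137 is included for contrast (a SIGKILL after an exhausted grace period would produce it, but it does not occur in this log):

```python
import signal

def describe_exit(code: int) -> str:
    """Decode the 128+N exit-code convention for containers."""
    if code == 0:
        return "clean exit"
    if code > 128:
        try:
            return f"killed by {signal.Signals(code - 128).name}"
        except ValueError:
            return f"killed by signal {code - 128}"
    return f"application exit code {code}"

for code in (0, 2, 143, 137):
    print(code, "->", describe_exit(code))
# 0 -> clean exit
# 2 -> application exit code 2
# 143 -> killed by SIGTERM
# 137 -> killed by SIGKILL
```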
Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.438344 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbf14167-7355-4185-b0c5-8258f1c43132-logs" (OuterVolumeSpecName: "logs") pod "fbf14167-7355-4185-b0c5-8258f1c43132" (UID: "fbf14167-7355-4185-b0c5-8258f1c43132"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.444970 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fbf14167-7355-4185-b0c5-8258f1c43132" (UID: "fbf14167-7355-4185-b0c5-8258f1c43132"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.450117 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbf14167-7355-4185-b0c5-8258f1c43132-kube-api-access-6d5ls" (OuterVolumeSpecName: "kube-api-access-6d5ls") pod "fbf14167-7355-4185-b0c5-8258f1c43132" (UID: "fbf14167-7355-4185-b0c5-8258f1c43132"). InnerVolumeSpecName "kube-api-access-6d5ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.491153 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbf14167-7355-4185-b0c5-8258f1c43132" (UID: "fbf14167-7355-4185-b0c5-8258f1c43132"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.509949 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-config-data" (OuterVolumeSpecName: "config-data") pod "fbf14167-7355-4185-b0c5-8258f1c43132" (UID: "fbf14167-7355-4185-b0c5-8258f1c43132"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.525255 4828 generic.go:334] "Generic (PLEG): container finished" podID="fbf14167-7355-4185-b0c5-8258f1c43132" containerID="e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7" exitCode=0 Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.525308 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69cd8bdf78-r2wnx" event={"ID":"fbf14167-7355-4185-b0c5-8258f1c43132","Type":"ContainerDied","Data":"e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7"} Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.525337 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69cd8bdf78-r2wnx" event={"ID":"fbf14167-7355-4185-b0c5-8258f1c43132","Type":"ContainerDied","Data":"7da24fe1a2bb54255f82eb17e5bd50b40858b87e1880cefdb534218700fd9ccb"} Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.525354 4828 scope.go:117] "RemoveContainer" containerID="e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.525482 4828 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-api-69cd8bdf78-r2wnx" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.538992 4828 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.539049 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.539060 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d5ls\" (UniqueName: \"kubernetes.io/projected/fbf14167-7355-4185-b0c5-8258f1c43132-kube-api-access-6d5ls\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.539069 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf14167-7355-4185-b0c5-8258f1c43132-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.539078 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbf14167-7355-4185-b0c5-8258f1c43132-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.567035 4828 scope.go:117] "RemoveContainer" containerID="c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.572503 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-69cd8bdf78-r2wnx"] Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.655503 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-69cd8bdf78-r2wnx"] Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.676809 4828 scope.go:117] "RemoveContainer" containerID="e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7" Dec 05 19:25:20 crc kubenswrapper[4828]: E1205 19:25:20.677440 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7\": container with ID starting with e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7 not found: ID does not exist" containerID="e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.677474 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7"} err="failed to get container status \"e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7\": rpc error: code = NotFound desc = could not find container \"e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7\": container with ID starting with e8fa3f5a40f1a00fea8320b7c16eff1e244f0ea27a018237d1f44c20da6ed6d7 not found: ID does not exist" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.677500 4828 scope.go:117] "RemoveContainer" containerID="c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0" Dec 05 19:25:20 crc kubenswrapper[4828]: E1205 19:25:20.680157 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0\": container with ID starting with c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0 not found: ID does not exist" containerID="c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0" Dec 05 19:25:20 crc kubenswrapper[4828]: I1205 19:25:20.680249 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0"} err="failed to get container status \"c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0\": rpc error: code = NotFound desc = could not find container \"c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0\": container with ID starting with c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0 not found: ID does not exist" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.566314 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7b86fcf7f7-wb4rw"] Dec 05 19:25:21 crc kubenswrapper[4828]: E1205 19:25:21.566796 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf14167-7355-4185-b0c5-8258f1c43132" containerName="barbican-api-log" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.566813 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf14167-7355-4185-b0c5-8258f1c43132" containerName="barbican-api-log" Dec 05 19:25:21 crc kubenswrapper[4828]: E1205 19:25:21.566871 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf14167-7355-4185-b0c5-8258f1c43132" containerName="barbican-api" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.566881 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf14167-7355-4185-b0c5-8258f1c43132" containerName="barbican-api" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.567088 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbf14167-7355-4185-b0c5-8258f1c43132" containerName="barbican-api-log" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.567116 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbf14167-7355-4185-b0c5-8258f1c43132" containerName="barbican-api"
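The error pair above is benign: scope.go retries RemoveContainer for an ID the runtime has already deleted, CRI-O answers with gRPC NotFound, and the kubelet logs "DeleteContainer returned error" but proceeds, since for a delete "not found" means the desired end state already holds. The same idempotent-cleanup shape in miniature; the runtime stub and exception type are illustrative stand-ins, not the kubelet's actual CRI client:

```python
class NotFoundError(Exception):
    """Stands in for a gRPC NotFound from the container runtime."""

class FakeRuntime:
    def __init__(self, containers):
        self.containers = set(containers)

    def remove(self, cid):
        if cid not in self.containers:
            raise NotFoundError(cid)
        self.containers.discard(cid)

def remove_container(runtime, cid):
    """Delete a container, treating 'already gone' as success."""
    try:
        runtime.remove(cid)
        print(f"removed {cid[:12]}")
    except NotFoundError:
        # Desired state (container absent) already holds; note it and move on.
        print(f"{cid[:12]} not found; nothing to do")

rt = FakeRuntime({"c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0"})
remove_container(rt, "c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0")
remove_container(rt, "c1ccc96a183ec3c2119ee16c420d1b5faad97f1a573b5879a157bc3f55e86fd0")
```

The second call reproduces what the log shows: the retry is reported, not escalated.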
Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.568304 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.570616 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.570810 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.570910 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.581705 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7b86fcf7f7-wb4rw"] Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.713152 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x58vr\" (UniqueName: \"kubernetes.io/projected/6cac2917-5dee-4c64-a745-42e811cd735f-kube-api-access-x58vr\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.713733 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cac2917-5dee-4c64-a745-42e811cd735f-combined-ca-bundle\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.713901 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cac2917-5dee-4c64-a745-42e811cd735f-config-data\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.714089 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cac2917-5dee-4c64-a745-42e811cd735f-internal-tls-certs\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.714142 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cac2917-5dee-4c64-a745-42e811cd735f-run-httpd\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.714183 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cac2917-5dee-4c64-a745-42e811cd735f-public-tls-certs\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.714229 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cac2917-5dee-4c64-a745-42e811cd735f-log-httpd\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") "
pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.714282 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6cac2917-5dee-4c64-a745-42e811cd735f-etc-swift\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.846265 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cac2917-5dee-4c64-a745-42e811cd735f-config-data\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.846309 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cac2917-5dee-4c64-a745-42e811cd735f-internal-tls-certs\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.846327 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cac2917-5dee-4c64-a745-42e811cd735f-run-httpd\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.846346 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cac2917-5dee-4c64-a745-42e811cd735f-public-tls-certs\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.846366 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cac2917-5dee-4c64-a745-42e811cd735f-log-httpd\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.846390 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6cac2917-5dee-4c64-a745-42e811cd735f-etc-swift\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.846447 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x58vr\" (UniqueName: \"kubernetes.io/projected/6cac2917-5dee-4c64-a745-42e811cd735f-kube-api-access-x58vr\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.846495 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cac2917-5dee-4c64-a745-42e811cd735f-combined-ca-bundle\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 
19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.849643 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cac2917-5dee-4c64-a745-42e811cd735f-log-httpd\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.849655 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cac2917-5dee-4c64-a745-42e811cd735f-run-httpd\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.852903 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6cac2917-5dee-4c64-a745-42e811cd735f-etc-swift\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.853063 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cac2917-5dee-4c64-a745-42e811cd735f-combined-ca-bundle\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.856272 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cac2917-5dee-4c64-a745-42e811cd735f-config-data\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.868937 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cac2917-5dee-4c64-a745-42e811cd735f-internal-tls-certs\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.869771 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x58vr\" (UniqueName: \"kubernetes.io/projected/6cac2917-5dee-4c64-a745-42e811cd735f-kube-api-access-x58vr\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.881485 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cac2917-5dee-4c64-a745-42e811cd735f-public-tls-certs\") pod \"swift-proxy-7b86fcf7f7-wb4rw\" (UID: \"6cac2917-5dee-4c64-a745-42e811cd735f\") " pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:21 crc kubenswrapper[4828]: I1205 19:25:21.887254 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:22 crc kubenswrapper[4828]: I1205 19:25:22.519315 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbf14167-7355-4185-b0c5-8258f1c43132" path="/var/lib/kubelet/pods/fbf14167-7355-4185-b0c5-8258f1c43132/volumes" Dec 05 19:25:23 crc kubenswrapper[4828]: I1205 19:25:23.548120 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:23 crc kubenswrapper[4828]: I1205 19:25:23.548620 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="ceilometer-central-agent" containerID="cri-o://851363f5f50f0d558604f1626f43f5cc900dd0bfd2649b3c5b52dcb0d8593f9b" gracePeriod=30 Dec 05 19:25:23 crc kubenswrapper[4828]: I1205 19:25:23.548955 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="proxy-httpd" containerID="cri-o://0619fde435e3394815f01c932a85580772e7d9421c17b9fe4986fcd75a22eb13" gracePeriod=30 Dec 05 19:25:23 crc kubenswrapper[4828]: I1205 19:25:23.549086 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="sg-core" containerID="cri-o://48a2a6b4e9efc18d8ec16540cbdfce06b76bc66827257abc04d239be48058871" gracePeriod=30 Dec 05 19:25:23 crc kubenswrapper[4828]: I1205 19:25:23.549131 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="ceilometer-notification-agent" containerID="cri-o://f5eafbdaa5a9a9ad7b20c2b8e3d88bfe67115458b3a79c1d3ccba19b2895ce29" gracePeriod=30 Dec 05 19:25:23 crc kubenswrapper[4828]: I1205 19:25:23.582610 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 05 19:25:24 crc kubenswrapper[4828]: I1205 19:25:24.590516 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d665a74-59df-4fd9-924b-c082280c3f13" containerID="0619fde435e3394815f01c932a85580772e7d9421c17b9fe4986fcd75a22eb13" exitCode=0 Dec 05 19:25:24 crc kubenswrapper[4828]: I1205 19:25:24.590858 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d665a74-59df-4fd9-924b-c082280c3f13" containerID="48a2a6b4e9efc18d8ec16540cbdfce06b76bc66827257abc04d239be48058871" exitCode=2 Dec 05 19:25:24 crc kubenswrapper[4828]: I1205 19:25:24.590871 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d665a74-59df-4fd9-924b-c082280c3f13" containerID="851363f5f50f0d558604f1626f43f5cc900dd0bfd2649b3c5b52dcb0d8593f9b" exitCode=0 Dec 05 19:25:24 crc kubenswrapper[4828]: I1205 19:25:24.590574 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d665a74-59df-4fd9-924b-c082280c3f13","Type":"ContainerDied","Data":"0619fde435e3394815f01c932a85580772e7d9421c17b9fe4986fcd75a22eb13"} Dec 05 19:25:24 crc kubenswrapper[4828]: I1205 19:25:24.590905 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d665a74-59df-4fd9-924b-c082280c3f13","Type":"ContainerDied","Data":"48a2a6b4e9efc18d8ec16540cbdfce06b76bc66827257abc04d239be48058871"} Dec 05 19:25:24 crc 
kubenswrapper[4828]: I1205 19:25:24.590922 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d665a74-59df-4fd9-924b-c082280c3f13","Type":"ContainerDied","Data":"851363f5f50f0d558604f1626f43f5cc900dd0bfd2649b3c5b52dcb0d8593f9b"} Dec 05 19:25:26 crc kubenswrapper[4828]: I1205 19:25:26.150856 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-699b69c564-442lb" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 05 19:25:26 crc kubenswrapper[4828]: I1205 19:25:26.611797 4828 generic.go:334] "Generic (PLEG): container finished" podID="5d665a74-59df-4fd9-924b-c082280c3f13" containerID="f5eafbdaa5a9a9ad7b20c2b8e3d88bfe67115458b3a79c1d3ccba19b2895ce29" exitCode=0 Dec 05 19:25:26 crc kubenswrapper[4828]: I1205 19:25:26.612109 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d665a74-59df-4fd9-924b-c082280c3f13","Type":"ContainerDied","Data":"f5eafbdaa5a9a9ad7b20c2b8e3d88bfe67115458b3a79c1d3ccba19b2895ce29"} Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.255953 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.312357 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-config-data\") pod \"5d665a74-59df-4fd9-924b-c082280c3f13\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.312424 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk26f\" (UniqueName: \"kubernetes.io/projected/5d665a74-59df-4fd9-924b-c082280c3f13-kube-api-access-vk26f\") pod \"5d665a74-59df-4fd9-924b-c082280c3f13\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.312454 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-combined-ca-bundle\") pod \"5d665a74-59df-4fd9-924b-c082280c3f13\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.312522 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d665a74-59df-4fd9-924b-c082280c3f13-run-httpd\") pod \"5d665a74-59df-4fd9-924b-c082280c3f13\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.312563 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d665a74-59df-4fd9-924b-c082280c3f13-log-httpd\") pod \"5d665a74-59df-4fd9-924b-c082280c3f13\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.312587 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-scripts\") pod \"5d665a74-59df-4fd9-924b-c082280c3f13\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") "
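All three probe failures in this window hit pods that are mid-teardown, and the error text encodes how far the shutdown has progressed: "connection reset by peer" (horizon, 19:25:16) is a listener dying mid-request, "statuscode: 502" (ceilometer's proxy-httpd, 19:25:23) is a front-end proxy whose backend has already stopped, and "connection refused" (horizon, 19:25:26) means nothing is listening anymore. A sketch that extracts and classifies "Probe failed" entries; the quoted-string pattern tolerates the escaped quotes inside the output= field:

```python
import re
import sys

PROBE = re.compile(
    r'"Probe failed" probeType="(?P<type>\w+)" pod="(?P<pod>[^"]+)"'
    r'.*?output="(?P<out>(?:\\.|[^"\\])*)"'
)

def classify(output: str) -> str:
    if "connection refused" in output:
        return "nothing listening (shutdown done)"
    if "connection reset" in output:
        return "listener dying (mid-shutdown)"
    if "statuscode: 502" in output:
        return "proxy up, backend gone"
    return "other"

for line in sys.stdin:
    for m in PROBE.finditer(line):
        print(m.group("pod"), m.group("type"), "->", classify(m.group("out")))
```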
Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.312658 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-sg-core-conf-yaml\") pod \"5d665a74-59df-4fd9-924b-c082280c3f13\" (UID: \"5d665a74-59df-4fd9-924b-c082280c3f13\") " Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.314025 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d665a74-59df-4fd9-924b-c082280c3f13-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5d665a74-59df-4fd9-924b-c082280c3f13" (UID: "5d665a74-59df-4fd9-924b-c082280c3f13"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.314170 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d665a74-59df-4fd9-924b-c082280c3f13-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5d665a74-59df-4fd9-924b-c082280c3f13" (UID: "5d665a74-59df-4fd9-924b-c082280c3f13"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.320085 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d665a74-59df-4fd9-924b-c082280c3f13-kube-api-access-vk26f" (OuterVolumeSpecName: "kube-api-access-vk26f") pod "5d665a74-59df-4fd9-924b-c082280c3f13" (UID: "5d665a74-59df-4fd9-924b-c082280c3f13"). InnerVolumeSpecName "kube-api-access-vk26f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.320352 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-scripts" (OuterVolumeSpecName: "scripts") pod "5d665a74-59df-4fd9-924b-c082280c3f13" (UID: "5d665a74-59df-4fd9-924b-c082280c3f13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.325160 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7b86fcf7f7-wb4rw"] Dec 05 19:25:28 crc kubenswrapper[4828]: W1205 19:25:28.333167 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cac2917_5dee_4c64_a745_42e811cd735f.slice/crio-f82ffe6762476566a090f60b9960b9c5b036984cb2ca5c9406adf5e5d713af59 WatchSource:0}: Error finding container f82ffe6762476566a090f60b9960b9c5b036984cb2ca5c9406adf5e5d713af59: Status 404 returned error can't find the container with id f82ffe6762476566a090f60b9960b9c5b036984cb2ca5c9406adf5e5d713af59 Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.350688 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5d665a74-59df-4fd9-924b-c082280c3f13" (UID: "5d665a74-59df-4fd9-924b-c082280c3f13"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.408013 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d665a74-59df-4fd9-924b-c082280c3f13" (UID: "5d665a74-59df-4fd9-924b-c082280c3f13").
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.414626 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d665a74-59df-4fd9-924b-c082280c3f13-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.414661 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.414671 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.414680 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk26f\" (UniqueName: \"kubernetes.io/projected/5d665a74-59df-4fd9-924b-c082280c3f13-kube-api-access-vk26f\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.414690 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.414699 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d665a74-59df-4fd9-924b-c082280c3f13-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.429516 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-config-data" (OuterVolumeSpecName: "config-data") pod "5d665a74-59df-4fd9-924b-c082280c3f13" (UID: "5d665a74-59df-4fd9-924b-c082280c3f13"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.517884 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d665a74-59df-4fd9-924b-c082280c3f13-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.627935 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" event={"ID":"6cac2917-5dee-4c64-a745-42e811cd735f","Type":"ContainerStarted","Data":"df5b5a9dcbaf798b4efbb033596f90ca832386dc249b494f11df9faca17ff120"} Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.627982 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" event={"ID":"6cac2917-5dee-4c64-a745-42e811cd735f","Type":"ContainerStarted","Data":"f82ffe6762476566a090f60b9960b9c5b036984cb2ca5c9406adf5e5d713af59"} Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.630653 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d665a74-59df-4fd9-924b-c082280c3f13","Type":"ContainerDied","Data":"37f7456c67746ede925b5cd1e1a6ceb11e69fbd544a79de25d32d231cd43d6fa"} Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.630708 4828 scope.go:117] "RemoveContainer" containerID="0619fde435e3394815f01c932a85580772e7d9421c17b9fe4986fcd75a22eb13" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.630718 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.634007 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"847a8779-d691-4659-9166-a8f39abb55f4","Type":"ContainerStarted","Data":"476aa66f02aacde32b24ddd98d60765c4ee888d7fe60a8f355c78bae947df4a8"} Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.660997 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.59348963 podStartE2EDuration="16.660980312s" podCreationTimestamp="2025-12-05 19:25:12 +0000 UTC" firstStartedPulling="2025-12-05 19:25:13.83669519 +0000 UTC m=+1291.731917496" lastFinishedPulling="2025-12-05 19:25:27.904185872 +0000 UTC m=+1305.799408178" observedRunningTime="2025-12-05 19:25:28.659743899 +0000 UTC m=+1306.554966225" watchObservedRunningTime="2025-12-05 19:25:28.660980312 +0000 UTC m=+1306.556202608" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.673495 4828 scope.go:117] "RemoveContainer" containerID="48a2a6b4e9efc18d8ec16540cbdfce06b76bc66827257abc04d239be48058871" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.691929 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.701386 4828 scope.go:117] "RemoveContainer" containerID="f5eafbdaa5a9a9ad7b20c2b8e3d88bfe67115458b3a79c1d3ccba19b2895ce29" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.706342 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.726170 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:28 crc kubenswrapper[4828]: E1205 19:25:28.727416 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="proxy-httpd"
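The two "Observed pod startup duration" records differ instructively. For cinder-scheduler-0 earlier, the pulling timestamps were the zero value (image already on the node), so podStartSLOduration equals podStartE2EDuration (3.28s for both). For openstackclient, the image pull ran from 19:25:13.837 to 19:25:27.904, and that ~14.07s is exactly the gap between podStartE2EDuration="16.660980312s" and podStartSLOduration=2.59348963: the SLO metric excludes image pull time. Reproducing the arithmetic from the logged timestamps (nanoseconds trimmed to datetime's microsecond precision; the creation timestamp carries whole seconds only, hence the small residual against the logged value):

```python
from datetime import datetime

def ts(s: str) -> datetime:
    """Parse '2025-12-05 19:25:28.659743899 +0000 UTC', trimming to microseconds."""
    date, clock = s.split()[:2]
    if "." in clock:
        head, frac = clock.split(".")
        clock = f"{head}.{frac[:6]}"
    else:
        clock += ".000000"
    return datetime.strptime(f"{date} {clock}", "%Y-%m-%d %H:%M:%S.%f")

created = ts("2025-12-05 19:25:12 +0000 UTC")            # podCreationTimestamp
running = ts("2025-12-05 19:25:28.659743899 +0000 UTC")  # observedRunningTime
pull_a  = ts("2025-12-05 19:25:13.83669519 +0000 UTC")   # firstStartedPulling
pull_b  = ts("2025-12-05 19:25:27.904185872 +0000 UTC")  # lastFinishedPulling

e2e = (running - created).total_seconds()
slo = e2e - (pull_b - pull_a).total_seconds()
print(f"podStartE2EDuration ~ {e2e:.2f}s, podStartSLOduration ~ {slo:.2f}s")
# podStartE2EDuration ~ 16.66s, podStartSLOduration ~ 2.59s
```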
Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.727436 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="proxy-httpd" Dec 05 19:25:28 crc kubenswrapper[4828]: E1205 19:25:28.727464 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="ceilometer-notification-agent" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.727470 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="ceilometer-notification-agent" Dec 05 19:25:28 crc kubenswrapper[4828]: E1205 19:25:28.727488 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="sg-core" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.727494 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="sg-core" Dec 05 19:25:28 crc kubenswrapper[4828]: E1205 19:25:28.727505 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="ceilometer-central-agent" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.727510 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="ceilometer-central-agent" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.727666 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="proxy-httpd" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.727683 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="ceilometer-central-agent" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.727698 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="ceilometer-notification-agent" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.727711 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" containerName="sg-core" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.729304 4828 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.732702 4828 scope.go:117] "RemoveContainer" containerID="851363f5f50f0d558604f1626f43f5cc900dd0bfd2649b3c5b52dcb0d8593f9b" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.735669 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.737996 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.738569 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.824404 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.824462 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-config-data\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.824742 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c12050be-f677-4055-9598-622dbd342f0b-run-httpd\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.824910 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-scripts\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.825249 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq9gb\" (UniqueName: \"kubernetes.io/projected/c12050be-f677-4055-9598-622dbd342f0b-kube-api-access-xq9gb\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.825320 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c12050be-f677-4055-9598-622dbd342f0b-log-httpd\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.825405 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.928154 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-scripts\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.928216 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq9gb\" (UniqueName: \"kubernetes.io/projected/c12050be-f677-4055-9598-622dbd342f0b-kube-api-access-xq9gb\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.928258 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c12050be-f677-4055-9598-622dbd342f0b-log-httpd\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.928301 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.928356 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.928391 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-config-data\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.928455 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c12050be-f677-4055-9598-622dbd342f0b-run-httpd\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.928985 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c12050be-f677-4055-9598-622dbd342f0b-log-httpd\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.929044 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c12050be-f677-4055-9598-622dbd342f0b-run-httpd\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.933372 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.933441 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-scripts\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.937606 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.937657 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-config-data\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:28 crc kubenswrapper[4828]: I1205 19:25:28.959490 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq9gb\" (UniqueName: \"kubernetes.io/projected/c12050be-f677-4055-9598-622dbd342f0b-kube-api-access-xq9gb\") pod \"ceilometer-0\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " pod="openstack/ceilometer-0" Dec 05 19:25:29 crc kubenswrapper[4828]: I1205 19:25:29.053237 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:25:29 crc kubenswrapper[4828]: I1205 19:25:29.507676 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:29 crc kubenswrapper[4828]: W1205 19:25:29.510054 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc12050be_f677_4055_9598_622dbd342f0b.slice/crio-9312a97afc76ae1ec6be8be33bf4320dfb9d57c5992d6c208ab953b5242e3adf WatchSource:0}: Error finding container 9312a97afc76ae1ec6be8be33bf4320dfb9d57c5992d6c208ab953b5242e3adf: Status 404 returned error can't find the container with id 9312a97afc76ae1ec6be8be33bf4320dfb9d57c5992d6c208ab953b5242e3adf Dec 05 19:25:29 crc kubenswrapper[4828]: I1205 19:25:29.646417 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c12050be-f677-4055-9598-622dbd342f0b","Type":"ContainerStarted","Data":"9312a97afc76ae1ec6be8be33bf4320dfb9d57c5992d6c208ab953b5242e3adf"} Dec 05 19:25:29 crc kubenswrapper[4828]: I1205 19:25:29.648279 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" event={"ID":"6cac2917-5dee-4c64-a745-42e811cd735f","Type":"ContainerStarted","Data":"2366b2d7d3e43c2ed1602679ba1b11a0a6382803ea0502001a4610051380a571"} Dec 05 19:25:29 crc kubenswrapper[4828]: I1205 19:25:29.648428 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:29 crc kubenswrapper[4828]: I1205 19:25:29.675871 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" podStartSLOduration=8.675853823 podStartE2EDuration="8.675853823s" podCreationTimestamp="2025-12-05 19:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:25:29.669983034 +0000 UTC m=+1307.565205330" watchObservedRunningTime="2025-12-05 19:25:29.675853823 +0000 UTC m=+1307.571076129" Dec 05 19:25:30 crc kubenswrapper[4828]: I1205 19:25:30.458746 4828 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="5d665a74-59df-4fd9-924b-c082280c3f13" path="/var/lib/kubelet/pods/5d665a74-59df-4fd9-924b-c082280c3f13/volumes" Dec 05 19:25:30 crc kubenswrapper[4828]: I1205 19:25:30.665739 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c12050be-f677-4055-9598-622dbd342f0b","Type":"ContainerStarted","Data":"0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0"} Dec 05 19:25:30 crc kubenswrapper[4828]: I1205 19:25:30.665937 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:30 crc kubenswrapper[4828]: I1205 19:25:30.973027 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:31 crc kubenswrapper[4828]: I1205 19:25:31.679411 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c12050be-f677-4055-9598-622dbd342f0b","Type":"ContainerStarted","Data":"db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f"} Dec 05 19:25:32 crc kubenswrapper[4828]: I1205 19:25:32.691527 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c12050be-f677-4055-9598-622dbd342f0b","Type":"ContainerStarted","Data":"d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e"} Dec 05 19:25:33 crc kubenswrapper[4828]: I1205 19:25:33.704561 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c12050be-f677-4055-9598-622dbd342f0b","Type":"ContainerStarted","Data":"66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db"} Dec 05 19:25:33 crc kubenswrapper[4828]: I1205 19:25:33.705235 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="ceilometer-central-agent" containerID="cri-o://0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0" gracePeriod=30 Dec 05 19:25:33 crc kubenswrapper[4828]: I1205 19:25:33.705315 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 19:25:33 crc kubenswrapper[4828]: I1205 19:25:33.705619 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="proxy-httpd" containerID="cri-o://66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db" gracePeriod=30 Dec 05 19:25:33 crc kubenswrapper[4828]: I1205 19:25:33.705672 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="sg-core" containerID="cri-o://d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e" gracePeriod=30 Dec 05 19:25:33 crc kubenswrapper[4828]: I1205 19:25:33.705721 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="ceilometer-notification-agent" containerID="cri-o://db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f" gracePeriod=30 Dec 05 19:25:33 crc kubenswrapper[4828]: I1205 19:25:33.732897 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.379487944 podStartE2EDuration="5.732878041s" podCreationTimestamp="2025-12-05 19:25:28 +0000 UTC" firstStartedPulling="2025-12-05 
19:25:29.512565861 +0000 UTC m=+1307.407788167" lastFinishedPulling="2025-12-05 19:25:32.865955958 +0000 UTC m=+1310.761178264" observedRunningTime="2025-12-05 19:25:33.726377264 +0000 UTC m=+1311.621599580" watchObservedRunningTime="2025-12-05 19:25:33.732878041 +0000 UTC m=+1311.628100347" Dec 05 19:25:34 crc kubenswrapper[4828]: I1205 19:25:34.714592 4828 generic.go:334] "Generic (PLEG): container finished" podID="c12050be-f677-4055-9598-622dbd342f0b" containerID="66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db" exitCode=0 Dec 05 19:25:34 crc kubenswrapper[4828]: I1205 19:25:34.714634 4828 generic.go:334] "Generic (PLEG): container finished" podID="c12050be-f677-4055-9598-622dbd342f0b" containerID="d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e" exitCode=2 Dec 05 19:25:34 crc kubenswrapper[4828]: I1205 19:25:34.714643 4828 generic.go:334] "Generic (PLEG): container finished" podID="c12050be-f677-4055-9598-622dbd342f0b" containerID="db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f" exitCode=0 Dec 05 19:25:34 crc kubenswrapper[4828]: I1205 19:25:34.714656 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c12050be-f677-4055-9598-622dbd342f0b","Type":"ContainerDied","Data":"66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db"} Dec 05 19:25:34 crc kubenswrapper[4828]: I1205 19:25:34.714736 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c12050be-f677-4055-9598-622dbd342f0b","Type":"ContainerDied","Data":"d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e"} Dec 05 19:25:34 crc kubenswrapper[4828]: I1205 19:25:34.714749 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c12050be-f677-4055-9598-622dbd342f0b","Type":"ContainerDied","Data":"db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f"} Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.259642 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.259984 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.543165 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.583193 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-scripts\") pod \"c12050be-f677-4055-9598-622dbd342f0b\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.583254 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c12050be-f677-4055-9598-622dbd342f0b-run-httpd\") pod \"c12050be-f677-4055-9598-622dbd342f0b\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.583296 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-sg-core-conf-yaml\") pod \"c12050be-f677-4055-9598-622dbd342f0b\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.583349 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-combined-ca-bundle\") pod \"c12050be-f677-4055-9598-622dbd342f0b\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.583405 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-config-data\") pod \"c12050be-f677-4055-9598-622dbd342f0b\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.583435 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq9gb\" (UniqueName: \"kubernetes.io/projected/c12050be-f677-4055-9598-622dbd342f0b-kube-api-access-xq9gb\") pod \"c12050be-f677-4055-9598-622dbd342f0b\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.583473 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c12050be-f677-4055-9598-622dbd342f0b-log-httpd\") pod \"c12050be-f677-4055-9598-622dbd342f0b\" (UID: \"c12050be-f677-4055-9598-622dbd342f0b\") " Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.583752 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c12050be-f677-4055-9598-622dbd342f0b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c12050be-f677-4055-9598-622dbd342f0b" (UID: "c12050be-f677-4055-9598-622dbd342f0b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.584049 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c12050be-f677-4055-9598-622dbd342f0b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c12050be-f677-4055-9598-622dbd342f0b" (UID: "c12050be-f677-4055-9598-622dbd342f0b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.584054 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c12050be-f677-4055-9598-622dbd342f0b-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.590290 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12050be-f677-4055-9598-622dbd342f0b-kube-api-access-xq9gb" (OuterVolumeSpecName: "kube-api-access-xq9gb") pod "c12050be-f677-4055-9598-622dbd342f0b" (UID: "c12050be-f677-4055-9598-622dbd342f0b"). InnerVolumeSpecName "kube-api-access-xq9gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.597663 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-scripts" (OuterVolumeSpecName: "scripts") pod "c12050be-f677-4055-9598-622dbd342f0b" (UID: "c12050be-f677-4055-9598-622dbd342f0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.644649 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c12050be-f677-4055-9598-622dbd342f0b" (UID: "c12050be-f677-4055-9598-622dbd342f0b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.686201 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.686666 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.686764 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq9gb\" (UniqueName: \"kubernetes.io/projected/c12050be-f677-4055-9598-622dbd342f0b-kube-api-access-xq9gb\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.686844 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c12050be-f677-4055-9598-622dbd342f0b-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.701677 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c12050be-f677-4055-9598-622dbd342f0b" (UID: "c12050be-f677-4055-9598-622dbd342f0b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.726601 4828 generic.go:334] "Generic (PLEG): container finished" podID="c12050be-f677-4055-9598-622dbd342f0b" containerID="0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0" exitCode=0 Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.726658 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c12050be-f677-4055-9598-622dbd342f0b","Type":"ContainerDied","Data":"0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0"} Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.726702 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.726726 4828 scope.go:117] "RemoveContainer" containerID="66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.726707 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c12050be-f677-4055-9598-622dbd342f0b","Type":"ContainerDied","Data":"9312a97afc76ae1ec6be8be33bf4320dfb9d57c5992d6c208ab953b5242e3adf"} Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.731494 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-config-data" (OuterVolumeSpecName: "config-data") pod "c12050be-f677-4055-9598-622dbd342f0b" (UID: "c12050be-f677-4055-9598-622dbd342f0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.754834 4828 scope.go:117] "RemoveContainer" containerID="d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.771797 4828 scope.go:117] "RemoveContainer" containerID="db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.789293 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.789333 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12050be-f677-4055-9598-622dbd342f0b-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.793432 4828 scope.go:117] "RemoveContainer" containerID="0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.810893 4828 scope.go:117] "RemoveContainer" containerID="66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db" Dec 05 19:25:35 crc kubenswrapper[4828]: E1205 19:25:35.811485 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db\": container with ID starting with 66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db not found: ID does not exist" containerID="66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.811534 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db"} err="failed to get container status \"66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db\": rpc error: code = NotFound desc = could not find container \"66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db\": container with ID starting with 66db806a15c1761aa1c394b6db821ffdeb245f00e004da1c0bdd88dfc74ab8db not found: ID does not exist" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.811563 4828 scope.go:117] "RemoveContainer" containerID="d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e" Dec 05 19:25:35 crc kubenswrapper[4828]: E1205 19:25:35.811913 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e\": container with ID starting with d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e not found: ID does not exist" containerID="d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.811945 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e"} err="failed to get container status \"d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e\": rpc error: code = NotFound desc = could not find container \"d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e\": container with ID starting with d43195c1bfd413cf1a5c650660095972bf86b7a44c9ffd333551ddb83ccb513e not found: ID does not exist" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.811963 4828 scope.go:117] "RemoveContainer" containerID="db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f" Dec 05 19:25:35 crc kubenswrapper[4828]: E1205 19:25:35.812233 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f\": container with ID starting with db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f not found: ID does not exist" containerID="db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.812265 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f"} err="failed to get container status \"db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f\": rpc error: code = NotFound desc = could not find container \"db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f\": container with ID starting with db70aaf12e8b69f46917d175863ee19ad99ed95e359a25874a637abd2870d13f not found: ID does not exist" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.812283 4828 scope.go:117] "RemoveContainer" containerID="0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0" Dec 05 19:25:35 crc kubenswrapper[4828]: E1205 19:25:35.812590 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0\": container with ID starting with 0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0 not found: ID does not exist" 
containerID="0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0" Dec 05 19:25:35 crc kubenswrapper[4828]: I1205 19:25:35.812620 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0"} err="failed to get container status \"0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0\": rpc error: code = NotFound desc = could not find container \"0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0\": container with ID starting with 0dcafed26135bdb5d139ef90c3529cb42340c19e505c057907cf3dd4ed1f28e0 not found: ID does not exist" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.063054 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.072963 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.089620 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:36 crc kubenswrapper[4828]: E1205 19:25:36.090102 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="ceilometer-notification-agent" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.090128 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="ceilometer-notification-agent" Dec 05 19:25:36 crc kubenswrapper[4828]: E1205 19:25:36.090158 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="sg-core" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.090167 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="sg-core" Dec 05 19:25:36 crc kubenswrapper[4828]: E1205 19:25:36.090180 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="proxy-httpd" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.090188 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="proxy-httpd" Dec 05 19:25:36 crc kubenswrapper[4828]: E1205 19:25:36.090199 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="ceilometer-central-agent" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.090206 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="ceilometer-central-agent" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.090432 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="ceilometer-notification-agent" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.090463 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="sg-core" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.090478 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="ceilometer-central-agent" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.090495 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12050be-f677-4055-9598-622dbd342f0b" containerName="proxy-httpd" Dec 05 
19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.092973 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.094905 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.100142 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.104042 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.151370 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-699b69c564-442lb" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.163403 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-699b69c564-442lb" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.196041 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.196140 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71df0c2-5325-4ded-8ba0-691757e3c7e3-run-httpd\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.196171 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.196219 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-config-data\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.196351 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28ln8\" (UniqueName: \"kubernetes.io/projected/e71df0c2-5325-4ded-8ba0-691757e3c7e3-kube-api-access-28ln8\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.196403 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-scripts\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.196436 4828 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71df0c2-5325-4ded-8ba0-691757e3c7e3-log-httpd\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.297901 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71df0c2-5325-4ded-8ba0-691757e3c7e3-run-httpd\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.297948 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.297990 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-config-data\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.298048 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28ln8\" (UniqueName: \"kubernetes.io/projected/e71df0c2-5325-4ded-8ba0-691757e3c7e3-kube-api-access-28ln8\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.298091 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-scripts\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.298122 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71df0c2-5325-4ded-8ba0-691757e3c7e3-log-httpd\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.298150 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.298407 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71df0c2-5325-4ded-8ba0-691757e3c7e3-run-httpd\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.298648 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71df0c2-5325-4ded-8ba0-691757e3c7e3-log-httpd\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.301976 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.303548 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.304159 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-scripts\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.304512 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-config-data\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.321614 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28ln8\" (UniqueName: \"kubernetes.io/projected/e71df0c2-5325-4ded-8ba0-691757e3c7e3-kube-api-access-28ln8\") pod \"ceilometer-0\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.413124 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hff7r"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.414635 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hff7r" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.444872 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hff7r"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.464709 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.474335 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c12050be-f677-4055-9598-622dbd342f0b" path="/var/lib/kubelet/pods/c12050be-f677-4055-9598-622dbd342f0b/volumes" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.502782 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18760301-df61-407a-9aa4-0149bcf012dc-operator-scripts\") pod \"nova-api-db-create-hff7r\" (UID: \"18760301-df61-407a-9aa4-0149bcf012dc\") " pod="openstack/nova-api-db-create-hff7r" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.502961 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwkmt\" (UniqueName: \"kubernetes.io/projected/18760301-df61-407a-9aa4-0149bcf012dc-kube-api-access-nwkmt\") pod \"nova-api-db-create-hff7r\" (UID: \"18760301-df61-407a-9aa4-0149bcf012dc\") " pod="openstack/nova-api-db-create-hff7r" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.526415 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kw6q7"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.527673 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kw6q7" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.545392 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f06e-account-create-update-dv9jl"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.546587 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f06e-account-create-update-dv9jl" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.550546 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.565113 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kw6q7"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.576888 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f06e-account-create-update-dv9jl"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.604219 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.604479 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="59e69475-93aa-4875-8997-7cfa85de4b75" containerName="glance-log" containerID="cri-o://439b8fa3d319ffe2baabb3df6f255b4d1f96aa584e83fdf10c89904eba3fbd98" gracePeriod=30 Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.605019 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="59e69475-93aa-4875-8997-7cfa85de4b75" containerName="glance-httpd" containerID="cri-o://86547ad61d25d2149d2bbdfa2485b1b8ec2cd2fd2c5897bf04dd073b9e4c7059" gracePeriod=30 Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.605170 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18760301-df61-407a-9aa4-0149bcf012dc-operator-scripts\") pod \"nova-api-db-create-hff7r\" (UID: 
\"18760301-df61-407a-9aa4-0149bcf012dc\") " pod="openstack/nova-api-db-create-hff7r" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.605228 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d53c991d-ae7a-4770-a082-a09b5a4340ad-operator-scripts\") pod \"nova-cell0-db-create-kw6q7\" (UID: \"d53c991d-ae7a-4770-a082-a09b5a4340ad\") " pod="openstack/nova-cell0-db-create-kw6q7" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.605276 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm2jc\" (UniqueName: \"kubernetes.io/projected/b2814910-93ec-4703-ba42-d50f50b7be57-kube-api-access-rm2jc\") pod \"nova-api-f06e-account-create-update-dv9jl\" (UID: \"b2814910-93ec-4703-ba42-d50f50b7be57\") " pod="openstack/nova-api-f06e-account-create-update-dv9jl" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.605333 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2814910-93ec-4703-ba42-d50f50b7be57-operator-scripts\") pod \"nova-api-f06e-account-create-update-dv9jl\" (UID: \"b2814910-93ec-4703-ba42-d50f50b7be57\") " pod="openstack/nova-api-f06e-account-create-update-dv9jl" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.605382 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwkmt\" (UniqueName: \"kubernetes.io/projected/18760301-df61-407a-9aa4-0149bcf012dc-kube-api-access-nwkmt\") pod \"nova-api-db-create-hff7r\" (UID: \"18760301-df61-407a-9aa4-0149bcf012dc\") " pod="openstack/nova-api-db-create-hff7r" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.605438 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp4pm\" (UniqueName: \"kubernetes.io/projected/d53c991d-ae7a-4770-a082-a09b5a4340ad-kube-api-access-pp4pm\") pod \"nova-cell0-db-create-kw6q7\" (UID: \"d53c991d-ae7a-4770-a082-a09b5a4340ad\") " pod="openstack/nova-cell0-db-create-kw6q7" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.610746 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18760301-df61-407a-9aa4-0149bcf012dc-operator-scripts\") pod \"nova-api-db-create-hff7r\" (UID: \"18760301-df61-407a-9aa4-0149bcf012dc\") " pod="openstack/nova-api-db-create-hff7r" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.637160 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwkmt\" (UniqueName: \"kubernetes.io/projected/18760301-df61-407a-9aa4-0149bcf012dc-kube-api-access-nwkmt\") pod \"nova-api-db-create-hff7r\" (UID: \"18760301-df61-407a-9aa4-0149bcf012dc\") " pod="openstack/nova-api-db-create-hff7r" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.706700 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d53c991d-ae7a-4770-a082-a09b5a4340ad-operator-scripts\") pod \"nova-cell0-db-create-kw6q7\" (UID: \"d53c991d-ae7a-4770-a082-a09b5a4340ad\") " pod="openstack/nova-cell0-db-create-kw6q7" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.706750 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm2jc\" (UniqueName: 
\"kubernetes.io/projected/b2814910-93ec-4703-ba42-d50f50b7be57-kube-api-access-rm2jc\") pod \"nova-api-f06e-account-create-update-dv9jl\" (UID: \"b2814910-93ec-4703-ba42-d50f50b7be57\") " pod="openstack/nova-api-f06e-account-create-update-dv9jl" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.706798 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2814910-93ec-4703-ba42-d50f50b7be57-operator-scripts\") pod \"nova-api-f06e-account-create-update-dv9jl\" (UID: \"b2814910-93ec-4703-ba42-d50f50b7be57\") " pod="openstack/nova-api-f06e-account-create-update-dv9jl" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.706858 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp4pm\" (UniqueName: \"kubernetes.io/projected/d53c991d-ae7a-4770-a082-a09b5a4340ad-kube-api-access-pp4pm\") pod \"nova-cell0-db-create-kw6q7\" (UID: \"d53c991d-ae7a-4770-a082-a09b5a4340ad\") " pod="openstack/nova-cell0-db-create-kw6q7" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.707838 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d53c991d-ae7a-4770-a082-a09b5a4340ad-operator-scripts\") pod \"nova-cell0-db-create-kw6q7\" (UID: \"d53c991d-ae7a-4770-a082-a09b5a4340ad\") " pod="openstack/nova-cell0-db-create-kw6q7" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.708400 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2814910-93ec-4703-ba42-d50f50b7be57-operator-scripts\") pod \"nova-api-f06e-account-create-update-dv9jl\" (UID: \"b2814910-93ec-4703-ba42-d50f50b7be57\") " pod="openstack/nova-api-f06e-account-create-update-dv9jl" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.740765 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm2jc\" (UniqueName: \"kubernetes.io/projected/b2814910-93ec-4703-ba42-d50f50b7be57-kube-api-access-rm2jc\") pod \"nova-api-f06e-account-create-update-dv9jl\" (UID: \"b2814910-93ec-4703-ba42-d50f50b7be57\") " pod="openstack/nova-api-f06e-account-create-update-dv9jl" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.745117 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hff7r" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.750107 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7npt7"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.751597 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp4pm\" (UniqueName: \"kubernetes.io/projected/d53c991d-ae7a-4770-a082-a09b5a4340ad-kube-api-access-pp4pm\") pod \"nova-cell0-db-create-kw6q7\" (UID: \"d53c991d-ae7a-4770-a082-a09b5a4340ad\") " pod="openstack/nova-cell0-db-create-kw6q7" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.751655 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7npt7" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.771651 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e150-account-create-update-nhw6k"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.772895 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e150-account-create-update-nhw6k" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.775016 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.795528 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e150-account-create-update-nhw6k"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.839973 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7npt7"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.911893 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.921976 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2e70-account-create-update-xtvw9"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.923130 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.924050 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7b86fcf7f7-wb4rw" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.931168 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kw6q7" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.932802 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.945835 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2e70-account-create-update-xtvw9"] Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.953554 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhqk4\" (UniqueName: \"kubernetes.io/projected/0272a767-af95-4be9-a3ca-bcddcf1c9938-kube-api-access-dhqk4\") pod \"nova-cell1-db-create-7npt7\" (UID: \"0272a767-af95-4be9-a3ca-bcddcf1c9938\") " pod="openstack/nova-cell1-db-create-7npt7" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.953598 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0272a767-af95-4be9-a3ca-bcddcf1c9938-operator-scripts\") pod \"nova-cell1-db-create-7npt7\" (UID: \"0272a767-af95-4be9-a3ca-bcddcf1c9938\") " pod="openstack/nova-cell1-db-create-7npt7" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.953642 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swlxv\" (UniqueName: \"kubernetes.io/projected/8eb1a69c-f75a-4899-85d6-504b6a4e7847-kube-api-access-swlxv\") pod \"nova-cell0-e150-account-create-update-nhw6k\" (UID: \"8eb1a69c-f75a-4899-85d6-504b6a4e7847\") " pod="openstack/nova-cell0-e150-account-create-update-nhw6k" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.953705 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb1a69c-f75a-4899-85d6-504b6a4e7847-operator-scripts\") pod \"nova-cell0-e150-account-create-update-nhw6k\" (UID: \"8eb1a69c-f75a-4899-85d6-504b6a4e7847\") " 
pod="openstack/nova-cell0-e150-account-create-update-nhw6k" Dec 05 19:25:36 crc kubenswrapper[4828]: I1205 19:25:36.985626 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f06e-account-create-update-dv9jl" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.055930 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhqk4\" (UniqueName: \"kubernetes.io/projected/0272a767-af95-4be9-a3ca-bcddcf1c9938-kube-api-access-dhqk4\") pod \"nova-cell1-db-create-7npt7\" (UID: \"0272a767-af95-4be9-a3ca-bcddcf1c9938\") " pod="openstack/nova-cell1-db-create-7npt7" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.056002 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0272a767-af95-4be9-a3ca-bcddcf1c9938-operator-scripts\") pod \"nova-cell1-db-create-7npt7\" (UID: \"0272a767-af95-4be9-a3ca-bcddcf1c9938\") " pod="openstack/nova-cell1-db-create-7npt7" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.056038 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqlb2\" (UniqueName: \"kubernetes.io/projected/b10629d6-8c01-43e1-bd8c-67a3f4e2d678-kube-api-access-kqlb2\") pod \"nova-cell1-2e70-account-create-update-xtvw9\" (UID: \"b10629d6-8c01-43e1-bd8c-67a3f4e2d678\") " pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.056097 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swlxv\" (UniqueName: \"kubernetes.io/projected/8eb1a69c-f75a-4899-85d6-504b6a4e7847-kube-api-access-swlxv\") pod \"nova-cell0-e150-account-create-update-nhw6k\" (UID: \"8eb1a69c-f75a-4899-85d6-504b6a4e7847\") " pod="openstack/nova-cell0-e150-account-create-update-nhw6k" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.056188 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb1a69c-f75a-4899-85d6-504b6a4e7847-operator-scripts\") pod \"nova-cell0-e150-account-create-update-nhw6k\" (UID: \"8eb1a69c-f75a-4899-85d6-504b6a4e7847\") " pod="openstack/nova-cell0-e150-account-create-update-nhw6k" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.056264 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10629d6-8c01-43e1-bd8c-67a3f4e2d678-operator-scripts\") pod \"nova-cell1-2e70-account-create-update-xtvw9\" (UID: \"b10629d6-8c01-43e1-bd8c-67a3f4e2d678\") " pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.058260 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb1a69c-f75a-4899-85d6-504b6a4e7847-operator-scripts\") pod \"nova-cell0-e150-account-create-update-nhw6k\" (UID: \"8eb1a69c-f75a-4899-85d6-504b6a4e7847\") " pod="openstack/nova-cell0-e150-account-create-update-nhw6k" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.058395 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0272a767-af95-4be9-a3ca-bcddcf1c9938-operator-scripts\") pod \"nova-cell1-db-create-7npt7\" (UID: \"0272a767-af95-4be9-a3ca-bcddcf1c9938\") " 
pod="openstack/nova-cell1-db-create-7npt7" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.079642 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhqk4\" (UniqueName: \"kubernetes.io/projected/0272a767-af95-4be9-a3ca-bcddcf1c9938-kube-api-access-dhqk4\") pod \"nova-cell1-db-create-7npt7\" (UID: \"0272a767-af95-4be9-a3ca-bcddcf1c9938\") " pod="openstack/nova-cell1-db-create-7npt7" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.081520 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swlxv\" (UniqueName: \"kubernetes.io/projected/8eb1a69c-f75a-4899-85d6-504b6a4e7847-kube-api-access-swlxv\") pod \"nova-cell0-e150-account-create-update-nhw6k\" (UID: \"8eb1a69c-f75a-4899-85d6-504b6a4e7847\") " pod="openstack/nova-cell0-e150-account-create-update-nhw6k" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.130120 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7npt7" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.153413 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e150-account-create-update-nhw6k" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.157694 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10629d6-8c01-43e1-bd8c-67a3f4e2d678-operator-scripts\") pod \"nova-cell1-2e70-account-create-update-xtvw9\" (UID: \"b10629d6-8c01-43e1-bd8c-67a3f4e2d678\") " pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.157907 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqlb2\" (UniqueName: \"kubernetes.io/projected/b10629d6-8c01-43e1-bd8c-67a3f4e2d678-kube-api-access-kqlb2\") pod \"nova-cell1-2e70-account-create-update-xtvw9\" (UID: \"b10629d6-8c01-43e1-bd8c-67a3f4e2d678\") " pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.158415 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10629d6-8c01-43e1-bd8c-67a3f4e2d678-operator-scripts\") pod \"nova-cell1-2e70-account-create-update-xtvw9\" (UID: \"b10629d6-8c01-43e1-bd8c-67a3f4e2d678\") " pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.180468 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqlb2\" (UniqueName: \"kubernetes.io/projected/b10629d6-8c01-43e1-bd8c-67a3f4e2d678-kube-api-access-kqlb2\") pod \"nova-cell1-2e70-account-create-update-xtvw9\" (UID: \"b10629d6-8c01-43e1-bd8c-67a3f4e2d678\") " pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.200877 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.248776 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.421789 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hff7r"] Dec 05 19:25:37 crc kubenswrapper[4828]: W1205 19:25:37.423546 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18760301_df61_407a_9aa4_0149bcf012dc.slice/crio-6c89474fbc6b3a0702e3d5b46ce6feb0fe12c9009651b7cedb731df46abfd9b6 WatchSource:0}: Error finding container 6c89474fbc6b3a0702e3d5b46ce6feb0fe12c9009651b7cedb731df46abfd9b6: Status 404 returned error can't find the container with id 6c89474fbc6b3a0702e3d5b46ce6feb0fe12c9009651b7cedb731df46abfd9b6 Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.541884 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kw6q7"] Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.577719 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f06e-account-create-update-dv9jl"] Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.588144 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e150-account-create-update-nhw6k"] Dec 05 19:25:37 crc kubenswrapper[4828]: W1205 19:25:37.636711 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eb1a69c_f75a_4899_85d6_504b6a4e7847.slice/crio-68fb746b7d43aaf0f1dce84425ae8c3f81ac187609dda6639f12d3e43ee33ee0 WatchSource:0}: Error finding container 68fb746b7d43aaf0f1dce84425ae8c3f81ac187609dda6639f12d3e43ee33ee0: Status 404 returned error can't find the container with id 68fb746b7d43aaf0f1dce84425ae8c3f81ac187609dda6639f12d3e43ee33ee0 Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.687140 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7npt7"] Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.783765 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f06e-account-create-update-dv9jl" event={"ID":"b2814910-93ec-4703-ba42-d50f50b7be57","Type":"ContainerStarted","Data":"f348f6f279f7fbe0c2054f9033412d5467c44f8ef6d6ac99ae48cb9d0092e5c2"} Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.786783 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kw6q7" event={"ID":"d53c991d-ae7a-4770-a082-a09b5a4340ad","Type":"ContainerStarted","Data":"bf8a10186c360abac9380cbb9bc1e9783778eefaa173063e3b64070dd38d7921"} Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.801748 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hff7r" event={"ID":"18760301-df61-407a-9aa4-0149bcf012dc","Type":"ContainerStarted","Data":"6c89474fbc6b3a0702e3d5b46ce6feb0fe12c9009651b7cedb731df46abfd9b6"} Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.805287 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7npt7" event={"ID":"0272a767-af95-4be9-a3ca-bcddcf1c9938","Type":"ContainerStarted","Data":"39163e1e0238244faff54e0afc0e441b9143fe7c4c4280d3a905e10a322a4037"} Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.819111 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e71df0c2-5325-4ded-8ba0-691757e3c7e3","Type":"ContainerStarted","Data":"c53d461f908bbb208662bc6cc2e126509abc3552fb3659c34480e22b59d77525"} Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.820466 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e150-account-create-update-nhw6k" event={"ID":"8eb1a69c-f75a-4899-85d6-504b6a4e7847","Type":"ContainerStarted","Data":"68fb746b7d43aaf0f1dce84425ae8c3f81ac187609dda6639f12d3e43ee33ee0"} Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.822349 4828 generic.go:334] "Generic (PLEG): container finished" podID="59e69475-93aa-4875-8997-7cfa85de4b75" containerID="439b8fa3d319ffe2baabb3df6f255b4d1f96aa584e83fdf10c89904eba3fbd98" exitCode=143 Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.822677 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59e69475-93aa-4875-8997-7cfa85de4b75","Type":"ContainerDied","Data":"439b8fa3d319ffe2baabb3df6f255b4d1f96aa584e83fdf10c89904eba3fbd98"} Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.886763 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2e70-account-create-update-xtvw9"] Dec 05 19:25:37 crc kubenswrapper[4828]: W1205 19:25:37.894138 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb10629d6_8c01_43e1_bd8c_67a3f4e2d678.slice/crio-d28824e936241fc522d4cf811c633dc1ba8ef407f05e3e1e7eab290ae943aee3 WatchSource:0}: Error finding container d28824e936241fc522d4cf811c633dc1ba8ef407f05e3e1e7eab290ae943aee3: Status 404 returned error can't find the container with id d28824e936241fc522d4cf811c633dc1ba8ef407f05e3e1e7eab290ae943aee3 Dec 05 19:25:37 crc kubenswrapper[4828]: I1205 19:25:37.967276 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.176729 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-596dd8d85b-r59nh" podUID="a89115ab-f300-433f-934e-dce679bf1877" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.153:9696/\": dial tcp 10.217.0.153:9696: connect: connection refused" Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.490214 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.796489 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.797100 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b578dfb2-8a7f-420d-a503-d2eac607b648" containerName="glance-log" containerID="cri-o://fad55517291a1a37c02bfb4fd854ff8de7c34131aa6f5455c37593aced627250" gracePeriod=30 Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.797792 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b578dfb2-8a7f-420d-a503-d2eac607b648" containerName="glance-httpd" containerID="cri-o://892f67ecc83c88204a5b994a1f12ae16e1385d91669efa57e70986ffceaa6285" gracePeriod=30 Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.835690 4828 generic.go:334] "Generic (PLEG): container finished" podID="8eb1a69c-f75a-4899-85d6-504b6a4e7847" 
containerID="758309287f3d3ad1459ffc40562e676f16a0b20bc4de9067bc7f86aede09712f" exitCode=0 Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.835772 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e150-account-create-update-nhw6k" event={"ID":"8eb1a69c-f75a-4899-85d6-504b6a4e7847","Type":"ContainerDied","Data":"758309287f3d3ad1459ffc40562e676f16a0b20bc4de9067bc7f86aede09712f"} Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.840241 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71df0c2-5325-4ded-8ba0-691757e3c7e3","Type":"ContainerStarted","Data":"69475276894d4479c70f86f7df3e145f803bea1e40a9068fc78858425e368d78"} Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.840594 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71df0c2-5325-4ded-8ba0-691757e3c7e3","Type":"ContainerStarted","Data":"4de21da88053854416fd49ea8a02fa6f7a9e864abee6f0019f83f3d3df5e1398"} Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.842562 4828 generic.go:334] "Generic (PLEG): container finished" podID="b10629d6-8c01-43e1-bd8c-67a3f4e2d678" containerID="bb586eed2a6db99ea3888ee9b1fb191dcf3ecbd44849d2f204f7e36f47b7504e" exitCode=0 Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.842629 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" event={"ID":"b10629d6-8c01-43e1-bd8c-67a3f4e2d678","Type":"ContainerDied","Data":"bb586eed2a6db99ea3888ee9b1fb191dcf3ecbd44849d2f204f7e36f47b7504e"} Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.842656 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" event={"ID":"b10629d6-8c01-43e1-bd8c-67a3f4e2d678","Type":"ContainerStarted","Data":"d28824e936241fc522d4cf811c633dc1ba8ef407f05e3e1e7eab290ae943aee3"} Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.844912 4828 generic.go:334] "Generic (PLEG): container finished" podID="b2814910-93ec-4703-ba42-d50f50b7be57" containerID="5635895057f29e6b3afc0ee8d8adec2359c7ecaade6a458f4fff395d74ede90b" exitCode=0 Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.844979 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f06e-account-create-update-dv9jl" event={"ID":"b2814910-93ec-4703-ba42-d50f50b7be57","Type":"ContainerDied","Data":"5635895057f29e6b3afc0ee8d8adec2359c7ecaade6a458f4fff395d74ede90b"} Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.846270 4828 generic.go:334] "Generic (PLEG): container finished" podID="d53c991d-ae7a-4770-a082-a09b5a4340ad" containerID="d62e2b2501ae290f91677a48b482370db05c514733663928dc9d283ced257279" exitCode=0 Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.846307 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kw6q7" event={"ID":"d53c991d-ae7a-4770-a082-a09b5a4340ad","Type":"ContainerDied","Data":"d62e2b2501ae290f91677a48b482370db05c514733663928dc9d283ced257279"} Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.847444 4828 generic.go:334] "Generic (PLEG): container finished" podID="18760301-df61-407a-9aa4-0149bcf012dc" containerID="3007293a3624d3be49f2a7c1e3f5ff30a48aadd18374b7511c2c0ec4f19956f9" exitCode=0 Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.847482 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hff7r" 
event={"ID":"18760301-df61-407a-9aa4-0149bcf012dc","Type":"ContainerDied","Data":"3007293a3624d3be49f2a7c1e3f5ff30a48aadd18374b7511c2c0ec4f19956f9"} Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.848615 4828 generic.go:334] "Generic (PLEG): container finished" podID="0272a767-af95-4be9-a3ca-bcddcf1c9938" containerID="13be22f4321abc31f124f199379e582241a5997cc814697e5eb5d79ad4e831a2" exitCode=0 Dec 05 19:25:38 crc kubenswrapper[4828]: I1205 19:25:38.848640 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7npt7" event={"ID":"0272a767-af95-4be9-a3ca-bcddcf1c9938","Type":"ContainerDied","Data":"13be22f4321abc31f124f199379e582241a5997cc814697e5eb5d79ad4e831a2"} Dec 05 19:25:39 crc kubenswrapper[4828]: I1205 19:25:39.876418 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71df0c2-5325-4ded-8ba0-691757e3c7e3","Type":"ContainerStarted","Data":"5bba3920c3855a135a0b07a62cc32e16a5f05e17f2cc0fcbe9ddbf6c125875fb"} Dec 05 19:25:39 crc kubenswrapper[4828]: I1205 19:25:39.897448 4828 generic.go:334] "Generic (PLEG): container finished" podID="59e69475-93aa-4875-8997-7cfa85de4b75" containerID="86547ad61d25d2149d2bbdfa2485b1b8ec2cd2fd2c5897bf04dd073b9e4c7059" exitCode=0 Dec 05 19:25:39 crc kubenswrapper[4828]: I1205 19:25:39.897551 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59e69475-93aa-4875-8997-7cfa85de4b75","Type":"ContainerDied","Data":"86547ad61d25d2149d2bbdfa2485b1b8ec2cd2fd2c5897bf04dd073b9e4c7059"} Dec 05 19:25:39 crc kubenswrapper[4828]: I1205 19:25:39.902834 4828 generic.go:334] "Generic (PLEG): container finished" podID="b578dfb2-8a7f-420d-a503-d2eac607b648" containerID="fad55517291a1a37c02bfb4fd854ff8de7c34131aa6f5455c37593aced627250" exitCode=143 Dec 05 19:25:39 crc kubenswrapper[4828]: I1205 19:25:39.903063 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b578dfb2-8a7f-420d-a503-d2eac607b648","Type":"ContainerDied","Data":"fad55517291a1a37c02bfb4fd854ff8de7c34131aa6f5455c37593aced627250"} Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.351279 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f06e-account-create-update-dv9jl" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.422094 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2814910-93ec-4703-ba42-d50f50b7be57-operator-scripts\") pod \"b2814910-93ec-4703-ba42-d50f50b7be57\" (UID: \"b2814910-93ec-4703-ba42-d50f50b7be57\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.423397 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2814910-93ec-4703-ba42-d50f50b7be57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2814910-93ec-4703-ba42-d50f50b7be57" (UID: "b2814910-93ec-4703-ba42-d50f50b7be57"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.423625 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm2jc\" (UniqueName: \"kubernetes.io/projected/b2814910-93ec-4703-ba42-d50f50b7be57-kube-api-access-rm2jc\") pod \"b2814910-93ec-4703-ba42-d50f50b7be57\" (UID: \"b2814910-93ec-4703-ba42-d50f50b7be57\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.425123 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2814910-93ec-4703-ba42-d50f50b7be57-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.431063 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2814910-93ec-4703-ba42-d50f50b7be57-kube-api-access-rm2jc" (OuterVolumeSpecName: "kube-api-access-rm2jc") pod "b2814910-93ec-4703-ba42-d50f50b7be57" (UID: "b2814910-93ec-4703-ba42-d50f50b7be57"). InnerVolumeSpecName "kube-api-access-rm2jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.527281 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm2jc\" (UniqueName: \"kubernetes.io/projected/b2814910-93ec-4703-ba42-d50f50b7be57-kube-api-access-rm2jc\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.530363 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7npt7" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.543864 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kw6q7" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.577038 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.581620 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e150-account-create-update-nhw6k" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.597554 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hff7r" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.614713 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.628940 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhqk4\" (UniqueName: \"kubernetes.io/projected/0272a767-af95-4be9-a3ca-bcddcf1c9938-kube-api-access-dhqk4\") pod \"0272a767-af95-4be9-a3ca-bcddcf1c9938\" (UID: \"0272a767-af95-4be9-a3ca-bcddcf1c9938\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629030 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-combined-ca-bundle\") pod \"59e69475-93aa-4875-8997-7cfa85de4b75\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629101 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d53c991d-ae7a-4770-a082-a09b5a4340ad-operator-scripts\") pod \"d53c991d-ae7a-4770-a082-a09b5a4340ad\" (UID: \"d53c991d-ae7a-4770-a082-a09b5a4340ad\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629180 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-scripts\") pod \"59e69475-93aa-4875-8997-7cfa85de4b75\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629201 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0272a767-af95-4be9-a3ca-bcddcf1c9938-operator-scripts\") pod \"0272a767-af95-4be9-a3ca-bcddcf1c9938\" (UID: \"0272a767-af95-4be9-a3ca-bcddcf1c9938\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629243 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-public-tls-certs\") pod \"59e69475-93aa-4875-8997-7cfa85de4b75\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629264 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18760301-df61-407a-9aa4-0149bcf012dc-operator-scripts\") pod \"18760301-df61-407a-9aa4-0149bcf012dc\" (UID: \"18760301-df61-407a-9aa4-0149bcf012dc\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629295 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-config-data\") pod \"59e69475-93aa-4875-8997-7cfa85de4b75\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629353 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59e69475-93aa-4875-8997-7cfa85de4b75-httpd-run\") pod \"59e69475-93aa-4875-8997-7cfa85de4b75\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629379 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10629d6-8c01-43e1-bd8c-67a3f4e2d678-operator-scripts\") pod 
\"b10629d6-8c01-43e1-bd8c-67a3f4e2d678\" (UID: \"b10629d6-8c01-43e1-bd8c-67a3f4e2d678\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629439 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb1a69c-f75a-4899-85d6-504b6a4e7847-operator-scripts\") pod \"8eb1a69c-f75a-4899-85d6-504b6a4e7847\" (UID: \"8eb1a69c-f75a-4899-85d6-504b6a4e7847\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629455 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e69475-93aa-4875-8997-7cfa85de4b75-logs\") pod \"59e69475-93aa-4875-8997-7cfa85de4b75\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629494 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs2cw\" (UniqueName: \"kubernetes.io/projected/59e69475-93aa-4875-8997-7cfa85de4b75-kube-api-access-cs2cw\") pod \"59e69475-93aa-4875-8997-7cfa85de4b75\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629530 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp4pm\" (UniqueName: \"kubernetes.io/projected/d53c991d-ae7a-4770-a082-a09b5a4340ad-kube-api-access-pp4pm\") pod \"d53c991d-ae7a-4770-a082-a09b5a4340ad\" (UID: \"d53c991d-ae7a-4770-a082-a09b5a4340ad\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629616 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swlxv\" (UniqueName: \"kubernetes.io/projected/8eb1a69c-f75a-4899-85d6-504b6a4e7847-kube-api-access-swlxv\") pod \"8eb1a69c-f75a-4899-85d6-504b6a4e7847\" (UID: \"8eb1a69c-f75a-4899-85d6-504b6a4e7847\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629638 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"59e69475-93aa-4875-8997-7cfa85de4b75\" (UID: \"59e69475-93aa-4875-8997-7cfa85de4b75\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629663 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwkmt\" (UniqueName: \"kubernetes.io/projected/18760301-df61-407a-9aa4-0149bcf012dc-kube-api-access-nwkmt\") pod \"18760301-df61-407a-9aa4-0149bcf012dc\" (UID: \"18760301-df61-407a-9aa4-0149bcf012dc\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.629680 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqlb2\" (UniqueName: \"kubernetes.io/projected/b10629d6-8c01-43e1-bd8c-67a3f4e2d678-kube-api-access-kqlb2\") pod \"b10629d6-8c01-43e1-bd8c-67a3f4e2d678\" (UID: \"b10629d6-8c01-43e1-bd8c-67a3f4e2d678\") " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.632971 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10629d6-8c01-43e1-bd8c-67a3f4e2d678-kube-api-access-kqlb2" (OuterVolumeSpecName: "kube-api-access-kqlb2") pod "b10629d6-8c01-43e1-bd8c-67a3f4e2d678" (UID: "b10629d6-8c01-43e1-bd8c-67a3f4e2d678"). InnerVolumeSpecName "kube-api-access-kqlb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.635772 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0272a767-af95-4be9-a3ca-bcddcf1c9938-kube-api-access-dhqk4" (OuterVolumeSpecName: "kube-api-access-dhqk4") pod "0272a767-af95-4be9-a3ca-bcddcf1c9938" (UID: "0272a767-af95-4be9-a3ca-bcddcf1c9938"). InnerVolumeSpecName "kube-api-access-dhqk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.636098 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b10629d6-8c01-43e1-bd8c-67a3f4e2d678-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b10629d6-8c01-43e1-bd8c-67a3f4e2d678" (UID: "b10629d6-8c01-43e1-bd8c-67a3f4e2d678"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.636116 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59e69475-93aa-4875-8997-7cfa85de4b75-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "59e69475-93aa-4875-8997-7cfa85de4b75" (UID: "59e69475-93aa-4875-8997-7cfa85de4b75"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.636426 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eb1a69c-f75a-4899-85d6-504b6a4e7847-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8eb1a69c-f75a-4899-85d6-504b6a4e7847" (UID: "8eb1a69c-f75a-4899-85d6-504b6a4e7847"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.639576 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d53c991d-ae7a-4770-a082-a09b5a4340ad-kube-api-access-pp4pm" (OuterVolumeSpecName: "kube-api-access-pp4pm") pod "d53c991d-ae7a-4770-a082-a09b5a4340ad" (UID: "d53c991d-ae7a-4770-a082-a09b5a4340ad"). InnerVolumeSpecName "kube-api-access-pp4pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.641984 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59e69475-93aa-4875-8997-7cfa85de4b75-logs" (OuterVolumeSpecName: "logs") pod "59e69475-93aa-4875-8997-7cfa85de4b75" (UID: "59e69475-93aa-4875-8997-7cfa85de4b75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.642088 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb1a69c-f75a-4899-85d6-504b6a4e7847-kube-api-access-swlxv" (OuterVolumeSpecName: "kube-api-access-swlxv") pod "8eb1a69c-f75a-4899-85d6-504b6a4e7847" (UID: "8eb1a69c-f75a-4899-85d6-504b6a4e7847"). InnerVolumeSpecName "kube-api-access-swlxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.642268 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0272a767-af95-4be9-a3ca-bcddcf1c9938-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0272a767-af95-4be9-a3ca-bcddcf1c9938" (UID: "0272a767-af95-4be9-a3ca-bcddcf1c9938"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.642388 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d53c991d-ae7a-4770-a082-a09b5a4340ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d53c991d-ae7a-4770-a082-a09b5a4340ad" (UID: "d53c991d-ae7a-4770-a082-a09b5a4340ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.642450 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18760301-df61-407a-9aa4-0149bcf012dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18760301-df61-407a-9aa4-0149bcf012dc" (UID: "18760301-df61-407a-9aa4-0149bcf012dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.644012 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18760301-df61-407a-9aa4-0149bcf012dc-kube-api-access-nwkmt" (OuterVolumeSpecName: "kube-api-access-nwkmt") pod "18760301-df61-407a-9aa4-0149bcf012dc" (UID: "18760301-df61-407a-9aa4-0149bcf012dc"). InnerVolumeSpecName "kube-api-access-nwkmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.644322 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "59e69475-93aa-4875-8997-7cfa85de4b75" (UID: "59e69475-93aa-4875-8997-7cfa85de4b75"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.645118 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e69475-93aa-4875-8997-7cfa85de4b75-kube-api-access-cs2cw" (OuterVolumeSpecName: "kube-api-access-cs2cw") pod "59e69475-93aa-4875-8997-7cfa85de4b75" (UID: "59e69475-93aa-4875-8997-7cfa85de4b75"). InnerVolumeSpecName "kube-api-access-cs2cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.645762 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-scripts" (OuterVolumeSpecName: "scripts") pod "59e69475-93aa-4875-8997-7cfa85de4b75" (UID: "59e69475-93aa-4875-8997-7cfa85de4b75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.702958 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59e69475-93aa-4875-8997-7cfa85de4b75" (UID: "59e69475-93aa-4875-8997-7cfa85de4b75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.716983 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-config-data" (OuterVolumeSpecName: "config-data") pod "59e69475-93aa-4875-8997-7cfa85de4b75" (UID: "59e69475-93aa-4875-8997-7cfa85de4b75"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.732222 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.732516 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0272a767-af95-4be9-a3ca-bcddcf1c9938-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.732610 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18760301-df61-407a-9aa4-0149bcf012dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.732725 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.732810 4828 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59e69475-93aa-4875-8997-7cfa85de4b75-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.732914 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10629d6-8c01-43e1-bd8c-67a3f4e2d678-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.732994 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb1a69c-f75a-4899-85d6-504b6a4e7847-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.733078 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e69475-93aa-4875-8997-7cfa85de4b75-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.733170 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs2cw\" (UniqueName: \"kubernetes.io/projected/59e69475-93aa-4875-8997-7cfa85de4b75-kube-api-access-cs2cw\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.733268 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp4pm\" (UniqueName: \"kubernetes.io/projected/d53c991d-ae7a-4770-a082-a09b5a4340ad-kube-api-access-pp4pm\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.733359 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swlxv\" (UniqueName: \"kubernetes.io/projected/8eb1a69c-f75a-4899-85d6-504b6a4e7847-kube-api-access-swlxv\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.733470 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.733570 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwkmt\" (UniqueName: \"kubernetes.io/projected/18760301-df61-407a-9aa4-0149bcf012dc-kube-api-access-nwkmt\") on node 
\"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.733652 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqlb2\" (UniqueName: \"kubernetes.io/projected/b10629d6-8c01-43e1-bd8c-67a3f4e2d678-kube-api-access-kqlb2\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.733744 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhqk4\" (UniqueName: \"kubernetes.io/projected/0272a767-af95-4be9-a3ca-bcddcf1c9938-kube-api-access-dhqk4\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.733843 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.733926 4828 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d53c991d-ae7a-4770-a082-a09b5a4340ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.767164 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.777234 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "59e69475-93aa-4875-8997-7cfa85de4b75" (UID: "59e69475-93aa-4875-8997-7cfa85de4b75"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.835809 4828 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59e69475-93aa-4875-8997-7cfa85de4b75-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.836127 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.912835 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.912836 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59e69475-93aa-4875-8997-7cfa85de4b75","Type":"ContainerDied","Data":"1a74db463782e10887dbe348094f3127de5388a3531fcaa0d5b2a9fdd72b0d51"} Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.912905 4828 scope.go:117] "RemoveContainer" containerID="86547ad61d25d2149d2bbdfa2485b1b8ec2cd2fd2c5897bf04dd073b9e4c7059" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.915765 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f06e-account-create-update-dv9jl" event={"ID":"b2814910-93ec-4703-ba42-d50f50b7be57","Type":"ContainerDied","Data":"f348f6f279f7fbe0c2054f9033412d5467c44f8ef6d6ac99ae48cb9d0092e5c2"} Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.915805 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f348f6f279f7fbe0c2054f9033412d5467c44f8ef6d6ac99ae48cb9d0092e5c2" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.916130 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f06e-account-create-update-dv9jl" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.919011 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" event={"ID":"b10629d6-8c01-43e1-bd8c-67a3f4e2d678","Type":"ContainerDied","Data":"d28824e936241fc522d4cf811c633dc1ba8ef407f05e3e1e7eab290ae943aee3"} Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.919125 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d28824e936241fc522d4cf811c633dc1ba8ef407f05e3e1e7eab290ae943aee3" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.919184 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2e70-account-create-update-xtvw9" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.921763 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kw6q7" event={"ID":"d53c991d-ae7a-4770-a082-a09b5a4340ad","Type":"ContainerDied","Data":"bf8a10186c360abac9380cbb9bc1e9783778eefaa173063e3b64070dd38d7921"} Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.921799 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf8a10186c360abac9380cbb9bc1e9783778eefaa173063e3b64070dd38d7921" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.921870 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kw6q7" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.926556 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hff7r" event={"ID":"18760301-df61-407a-9aa4-0149bcf012dc","Type":"ContainerDied","Data":"6c89474fbc6b3a0702e3d5b46ce6feb0fe12c9009651b7cedb731df46abfd9b6"} Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.926592 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c89474fbc6b3a0702e3d5b46ce6feb0fe12c9009651b7cedb731df46abfd9b6" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.926646 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hff7r" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.933854 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7npt7" event={"ID":"0272a767-af95-4be9-a3ca-bcddcf1c9938","Type":"ContainerDied","Data":"39163e1e0238244faff54e0afc0e441b9143fe7c4c4280d3a905e10a322a4037"} Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.933889 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39163e1e0238244faff54e0afc0e441b9143fe7c4c4280d3a905e10a322a4037" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.933945 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7npt7" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.938310 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e150-account-create-update-nhw6k" event={"ID":"8eb1a69c-f75a-4899-85d6-504b6a4e7847","Type":"ContainerDied","Data":"68fb746b7d43aaf0f1dce84425ae8c3f81ac187609dda6639f12d3e43ee33ee0"} Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.938346 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68fb746b7d43aaf0f1dce84425ae8c3f81ac187609dda6639f12d3e43ee33ee0" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.938398 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e150-account-create-update-nhw6k" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.953733 4828 generic.go:334] "Generic (PLEG): container finished" podID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" containerID="430af8e018b4db94e5fbc1658ab5c48af8bdcbbed4d9e9f4a8b1c4d49b774c99" exitCode=1 Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.953877 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerDied","Data":"430af8e018b4db94e5fbc1658ab5c48af8bdcbbed4d9e9f4a8b1c4d49b774c99"} Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.954563 4828 scope.go:117] "RemoveContainer" containerID="430af8e018b4db94e5fbc1658ab5c48af8bdcbbed4d9e9f4a8b1c4d49b774c99" Dec 05 19:25:40 crc kubenswrapper[4828]: E1205 19:25:40.954911 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.963500 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71df0c2-5325-4ded-8ba0-691757e3c7e3","Type":"ContainerStarted","Data":"4cf5462488e7a535a2e168288d55a064c1c1e7cbb139cdc56b30de2e042c3d55"} Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.963652 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="ceilometer-central-agent" containerID="cri-o://4de21da88053854416fd49ea8a02fa6f7a9e864abee6f0019f83f3d3df5e1398" gracePeriod=30 Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.963759 4828 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.963853 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="proxy-httpd" containerID="cri-o://4cf5462488e7a535a2e168288d55a064c1c1e7cbb139cdc56b30de2e042c3d55" gracePeriod=30 Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.963910 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="sg-core" containerID="cri-o://5bba3920c3855a135a0b07a62cc32e16a5f05e17f2cc0fcbe9ddbf6c125875fb" gracePeriod=30 Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.963955 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="ceilometer-notification-agent" containerID="cri-o://69475276894d4479c70f86f7df3e145f803bea1e40a9068fc78858425e368d78" gracePeriod=30 Dec 05 19:25:40 crc kubenswrapper[4828]: I1205 19:25:40.972530 4828 scope.go:117] "RemoveContainer" containerID="439b8fa3d319ffe2baabb3df6f255b4d1f96aa584e83fdf10c89904eba3fbd98" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.019350 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.626776899 podStartE2EDuration="5.019329937s" podCreationTimestamp="2025-12-05 19:25:36 +0000 UTC" firstStartedPulling="2025-12-05 19:25:37.214639446 +0000 UTC m=+1315.109861762" lastFinishedPulling="2025-12-05 19:25:40.607192494 +0000 UTC m=+1318.502414800" observedRunningTime="2025-12-05 19:25:41.006435078 +0000 UTC m=+1318.901657384" watchObservedRunningTime="2025-12-05 19:25:41.019329937 +0000 UTC m=+1318.914552243" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.038056 4828 scope.go:117] "RemoveContainer" containerID="5ae29702b9c693bc225b109d7199f5610a3f177228a2fa0ff8ce44ca6c251dda" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.038329 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.067568 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.138951 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:25:41 crc kubenswrapper[4828]: E1205 19:25:41.139356 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53c991d-ae7a-4770-a082-a09b5a4340ad" containerName="mariadb-database-create" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.139368 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d53c991d-ae7a-4770-a082-a09b5a4340ad" containerName="mariadb-database-create" Dec 05 19:25:41 crc kubenswrapper[4828]: E1205 19:25:41.139390 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e69475-93aa-4875-8997-7cfa85de4b75" containerName="glance-httpd" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.139395 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e69475-93aa-4875-8997-7cfa85de4b75" containerName="glance-httpd" Dec 05 19:25:41 crc kubenswrapper[4828]: E1205 19:25:41.139408 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10629d6-8c01-43e1-bd8c-67a3f4e2d678" 
containerName="mariadb-account-create-update" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.139415 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10629d6-8c01-43e1-bd8c-67a3f4e2d678" containerName="mariadb-account-create-update" Dec 05 19:25:41 crc kubenswrapper[4828]: E1205 19:25:41.139422 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e69475-93aa-4875-8997-7cfa85de4b75" containerName="glance-log" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.139429 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e69475-93aa-4875-8997-7cfa85de4b75" containerName="glance-log" Dec 05 19:25:41 crc kubenswrapper[4828]: E1205 19:25:41.139440 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb1a69c-f75a-4899-85d6-504b6a4e7847" containerName="mariadb-account-create-update" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.139446 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb1a69c-f75a-4899-85d6-504b6a4e7847" containerName="mariadb-account-create-update" Dec 05 19:25:41 crc kubenswrapper[4828]: E1205 19:25:41.139456 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0272a767-af95-4be9-a3ca-bcddcf1c9938" containerName="mariadb-database-create" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.139462 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="0272a767-af95-4be9-a3ca-bcddcf1c9938" containerName="mariadb-database-create" Dec 05 19:25:41 crc kubenswrapper[4828]: E1205 19:25:41.139474 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18760301-df61-407a-9aa4-0149bcf012dc" containerName="mariadb-database-create" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.139480 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="18760301-df61-407a-9aa4-0149bcf012dc" containerName="mariadb-database-create" Dec 05 19:25:41 crc kubenswrapper[4828]: E1205 19:25:41.139488 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2814910-93ec-4703-ba42-d50f50b7be57" containerName="mariadb-account-create-update" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.139494 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2814910-93ec-4703-ba42-d50f50b7be57" containerName="mariadb-account-create-update" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.164251 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e69475-93aa-4875-8997-7cfa85de4b75" containerName="glance-httpd" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.164342 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2814910-93ec-4703-ba42-d50f50b7be57" containerName="mariadb-account-create-update" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.164357 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="0272a767-af95-4be9-a3ca-bcddcf1c9938" containerName="mariadb-database-create" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.164366 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e69475-93aa-4875-8997-7cfa85de4b75" containerName="glance-log" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.164389 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10629d6-8c01-43e1-bd8c-67a3f4e2d678" containerName="mariadb-account-create-update" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.164419 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb1a69c-f75a-4899-85d6-504b6a4e7847" 
containerName="mariadb-account-create-update" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.164438 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="18760301-df61-407a-9aa4-0149bcf012dc" containerName="mariadb-database-create" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.164456 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53c991d-ae7a-4770-a082-a09b5a4340ad" containerName="mariadb-database-create" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.167666 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.182966 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.183200 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.210498 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.342314 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ee563d9-a334-428c-8d24-b0b1438e8ee8-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.342378 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ee563d9-a334-428c-8d24-b0b1438e8ee8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.342399 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee563d9-a334-428c-8d24-b0b1438e8ee8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.342437 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.342464 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ee563d9-a334-428c-8d24-b0b1438e8ee8-logs\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.342486 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6fh8\" (UniqueName: \"kubernetes.io/projected/8ee563d9-a334-428c-8d24-b0b1438e8ee8-kube-api-access-d6fh8\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " 
pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.342507 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee563d9-a334-428c-8d24-b0b1438e8ee8-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.342532 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee563d9-a334-428c-8d24-b0b1438e8ee8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.443698 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ee563d9-a334-428c-8d24-b0b1438e8ee8-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.443789 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ee563d9-a334-428c-8d24-b0b1438e8ee8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.443817 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee563d9-a334-428c-8d24-b0b1438e8ee8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.443890 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.443919 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ee563d9-a334-428c-8d24-b0b1438e8ee8-logs\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.443940 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6fh8\" (UniqueName: \"kubernetes.io/projected/8ee563d9-a334-428c-8d24-b0b1438e8ee8-kube-api-access-d6fh8\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.443963 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee563d9-a334-428c-8d24-b0b1438e8ee8-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc 
kubenswrapper[4828]: I1205 19:25:41.443991 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee563d9-a334-428c-8d24-b0b1438e8ee8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.445364 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.445652 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ee563d9-a334-428c-8d24-b0b1438e8ee8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.447396 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ee563d9-a334-428c-8d24-b0b1438e8ee8-logs\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.453902 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee563d9-a334-428c-8d24-b0b1438e8ee8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.460310 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee563d9-a334-428c-8d24-b0b1438e8ee8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.460940 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee563d9-a334-428c-8d24-b0b1438e8ee8-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.464438 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ee563d9-a334-428c-8d24-b0b1438e8ee8-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.489761 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6fh8\" (UniqueName: \"kubernetes.io/projected/8ee563d9-a334-428c-8d24-b0b1438e8ee8-kube-api-access-d6fh8\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.513804 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"8ee563d9-a334-428c-8d24-b0b1438e8ee8\") " pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.564406 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.974726 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-596dd8d85b-r59nh_a89115ab-f300-433f-934e-dce679bf1877/neutron-api/0.log" Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.975022 4828 generic.go:334] "Generic (PLEG): container finished" podID="a89115ab-f300-433f-934e-dce679bf1877" containerID="8b38dbdcc9480ed1ff926da8258e3e3ea50c6e23206ed3a90de306872e659f34" exitCode=137 Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.975074 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-596dd8d85b-r59nh" event={"ID":"a89115ab-f300-433f-934e-dce679bf1877","Type":"ContainerDied","Data":"8b38dbdcc9480ed1ff926da8258e3e3ea50c6e23206ed3a90de306872e659f34"} Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.977510 4828 generic.go:334] "Generic (PLEG): container finished" podID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerID="4cf5462488e7a535a2e168288d55a064c1c1e7cbb139cdc56b30de2e042c3d55" exitCode=0 Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.977531 4828 generic.go:334] "Generic (PLEG): container finished" podID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerID="5bba3920c3855a135a0b07a62cc32e16a5f05e17f2cc0fcbe9ddbf6c125875fb" exitCode=2 Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.977538 4828 generic.go:334] "Generic (PLEG): container finished" podID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerID="69475276894d4479c70f86f7df3e145f803bea1e40a9068fc78858425e368d78" exitCode=0 Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.977587 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71df0c2-5325-4ded-8ba0-691757e3c7e3","Type":"ContainerDied","Data":"4cf5462488e7a535a2e168288d55a064c1c1e7cbb139cdc56b30de2e042c3d55"} Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.977603 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71df0c2-5325-4ded-8ba0-691757e3c7e3","Type":"ContainerDied","Data":"5bba3920c3855a135a0b07a62cc32e16a5f05e17f2cc0fcbe9ddbf6c125875fb"} Dec 05 19:25:41 crc kubenswrapper[4828]: I1205 19:25:41.977612 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71df0c2-5325-4ded-8ba0-691757e3c7e3","Type":"ContainerDied","Data":"69475276894d4479c70f86f7df3e145f803bea1e40a9068fc78858425e368d78"} Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.153528 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-596dd8d85b-r59nh_a89115ab-f300-433f-934e-dce679bf1877/neutron-api/0.log" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.153614 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.188960 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 19:25:42 crc kubenswrapper[4828]: W1205 19:25:42.210322 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ee563d9_a334_428c_8d24_b0b1438e8ee8.slice/crio-7fe621609b97f0f3b47af00c40dd0d044d098c817e04980239351e289ba88997 WatchSource:0}: Error finding container 7fe621609b97f0f3b47af00c40dd0d044d098c817e04980239351e289ba88997: Status 404 returned error can't find the container with id 7fe621609b97f0f3b47af00c40dd0d044d098c817e04980239351e289ba88997 Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.259832 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-config\") pod \"a89115ab-f300-433f-934e-dce679bf1877\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.259910 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-httpd-config\") pod \"a89115ab-f300-433f-934e-dce679bf1877\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.259953 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-combined-ca-bundle\") pod \"a89115ab-f300-433f-934e-dce679bf1877\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.260049 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-ovndb-tls-certs\") pod \"a89115ab-f300-433f-934e-dce679bf1877\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.260090 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28mbx\" (UniqueName: \"kubernetes.io/projected/a89115ab-f300-433f-934e-dce679bf1877-kube-api-access-28mbx\") pod \"a89115ab-f300-433f-934e-dce679bf1877\" (UID: \"a89115ab-f300-433f-934e-dce679bf1877\") " Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.279842 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a89115ab-f300-433f-934e-dce679bf1877-kube-api-access-28mbx" (OuterVolumeSpecName: "kube-api-access-28mbx") pod "a89115ab-f300-433f-934e-dce679bf1877" (UID: "a89115ab-f300-433f-934e-dce679bf1877"). InnerVolumeSpecName "kube-api-access-28mbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.281947 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a89115ab-f300-433f-934e-dce679bf1877" (UID: "a89115ab-f300-433f-934e-dce679bf1877"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.330417 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-config" (OuterVolumeSpecName: "config") pod "a89115ab-f300-433f-934e-dce679bf1877" (UID: "a89115ab-f300-433f-934e-dce679bf1877"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.332693 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a89115ab-f300-433f-934e-dce679bf1877" (UID: "a89115ab-f300-433f-934e-dce679bf1877"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.355278 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a89115ab-f300-433f-934e-dce679bf1877" (UID: "a89115ab-f300-433f-934e-dce679bf1877"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.362628 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.362668 4828 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.362681 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.362694 4828 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89115ab-f300-433f-934e-dce679bf1877-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.362709 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28mbx\" (UniqueName: \"kubernetes.io/projected/a89115ab-f300-433f-934e-dce679bf1877-kube-api-access-28mbx\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.459281 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59e69475-93aa-4875-8997-7cfa85de4b75" path="/var/lib/kubelet/pods/59e69475-93aa-4875-8997-7cfa85de4b75/volumes" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.992103 4828 generic.go:334] "Generic (PLEG): container finished" podID="b578dfb2-8a7f-420d-a503-d2eac607b648" containerID="892f67ecc83c88204a5b994a1f12ae16e1385d91669efa57e70986ffceaa6285" exitCode=0 Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.992166 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b578dfb2-8a7f-420d-a503-d2eac607b648","Type":"ContainerDied","Data":"892f67ecc83c88204a5b994a1f12ae16e1385d91669efa57e70986ffceaa6285"} Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 
19:25:42.997957 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-596dd8d85b-r59nh_a89115ab-f300-433f-934e-dce679bf1877/neutron-api/0.log" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.998065 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-596dd8d85b-r59nh" event={"ID":"a89115ab-f300-433f-934e-dce679bf1877","Type":"ContainerDied","Data":"b812090d9f30edb16dad00f569ea2264fe06cefa3ebe3d334a7dcc455f1f6807"} Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.998071 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-596dd8d85b-r59nh" Dec 05 19:25:42 crc kubenswrapper[4828]: I1205 19:25:42.998109 4828 scope.go:117] "RemoveContainer" containerID="66beaa93c902cca0bf08c8dbbf7ea351d4526dadc4e93efeaefa2d3e1b495cf1" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.003544 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ee563d9-a334-428c-8d24-b0b1438e8ee8","Type":"ContainerStarted","Data":"3fb58be0fd946ce968d2433e0acf34f8dcf40456b709ba9ebcf181712d52e155"} Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.003579 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ee563d9-a334-428c-8d24-b0b1438e8ee8","Type":"ContainerStarted","Data":"7fe621609b97f0f3b47af00c40dd0d044d098c817e04980239351e289ba88997"} Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.045003 4828 scope.go:117] "RemoveContainer" containerID="8b38dbdcc9480ed1ff926da8258e3e3ea50c6e23206ed3a90de306872e659f34" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.057873 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-596dd8d85b-r59nh"] Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.072440 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-596dd8d85b-r59nh"] Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.590046 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.686778 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b578dfb2-8a7f-420d-a503-d2eac607b648-logs\") pod \"b578dfb2-8a7f-420d-a503-d2eac607b648\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.686842 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctxwp\" (UniqueName: \"kubernetes.io/projected/b578dfb2-8a7f-420d-a503-d2eac607b648-kube-api-access-ctxwp\") pod \"b578dfb2-8a7f-420d-a503-d2eac607b648\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.686864 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-config-data\") pod \"b578dfb2-8a7f-420d-a503-d2eac607b648\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.686905 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-scripts\") pod \"b578dfb2-8a7f-420d-a503-d2eac607b648\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.687009 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"b578dfb2-8a7f-420d-a503-d2eac607b648\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.687143 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-combined-ca-bundle\") pod \"b578dfb2-8a7f-420d-a503-d2eac607b648\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.687174 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b578dfb2-8a7f-420d-a503-d2eac607b648-httpd-run\") pod \"b578dfb2-8a7f-420d-a503-d2eac607b648\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.687190 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-internal-tls-certs\") pod \"b578dfb2-8a7f-420d-a503-d2eac607b648\" (UID: \"b578dfb2-8a7f-420d-a503-d2eac607b648\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.692928 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b578dfb2-8a7f-420d-a503-d2eac607b648-logs" (OuterVolumeSpecName: "logs") pod "b578dfb2-8a7f-420d-a503-d2eac607b648" (UID: "b578dfb2-8a7f-420d-a503-d2eac607b648"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.696708 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-scripts" (OuterVolumeSpecName: "scripts") pod "b578dfb2-8a7f-420d-a503-d2eac607b648" (UID: "b578dfb2-8a7f-420d-a503-d2eac607b648"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.697441 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b578dfb2-8a7f-420d-a503-d2eac607b648-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b578dfb2-8a7f-420d-a503-d2eac607b648" (UID: "b578dfb2-8a7f-420d-a503-d2eac607b648"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.697573 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b578dfb2-8a7f-420d-a503-d2eac607b648-kube-api-access-ctxwp" (OuterVolumeSpecName: "kube-api-access-ctxwp") pod "b578dfb2-8a7f-420d-a503-d2eac607b648" (UID: "b578dfb2-8a7f-420d-a503-d2eac607b648"). InnerVolumeSpecName "kube-api-access-ctxwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.705055 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "b578dfb2-8a7f-420d-a503-d2eac607b648" (UID: "b578dfb2-8a7f-420d-a503-d2eac607b648"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.752806 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-699b69c564-442lb" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.753319 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b578dfb2-8a7f-420d-a503-d2eac607b648" (UID: "b578dfb2-8a7f-420d-a503-d2eac607b648"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.754592 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-config-data" (OuterVolumeSpecName: "config-data") pod "b578dfb2-8a7f-420d-a503-d2eac607b648" (UID: "b578dfb2-8a7f-420d-a503-d2eac607b648"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.767782 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b578dfb2-8a7f-420d-a503-d2eac607b648" (UID: "b578dfb2-8a7f-420d-a503-d2eac607b648"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.788845 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.788883 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.788900 4828 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b578dfb2-8a7f-420d-a503-d2eac607b648-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.788911 4828 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.788934 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b578dfb2-8a7f-420d-a503-d2eac607b648-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.788945 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctxwp\" (UniqueName: \"kubernetes.io/projected/b578dfb2-8a7f-420d-a503-d2eac607b648-kube-api-access-ctxwp\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.788957 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.788968 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b578dfb2-8a7f-420d-a503-d2eac607b648-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.810932 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.890187 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-config-data\") pod \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.890268 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-horizon-tls-certs\") pod \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.890384 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-horizon-secret-key\") pod \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.890448 4828 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-combined-ca-bundle\") pod \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.890521 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-logs\") pod \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.890554 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjz7z\" (UniqueName: \"kubernetes.io/projected/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-kube-api-access-gjz7z\") pod \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.890616 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-scripts\") pod \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\" (UID: \"74df4612-463b-4b3c-8f2d-7dbb9494d6fe\") " Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.891055 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.892753 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-logs" (OuterVolumeSpecName: "logs") pod "74df4612-463b-4b3c-8f2d-7dbb9494d6fe" (UID: "74df4612-463b-4b3c-8f2d-7dbb9494d6fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.896340 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "74df4612-463b-4b3c-8f2d-7dbb9494d6fe" (UID: "74df4612-463b-4b3c-8f2d-7dbb9494d6fe"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.897048 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-kube-api-access-gjz7z" (OuterVolumeSpecName: "kube-api-access-gjz7z") pod "74df4612-463b-4b3c-8f2d-7dbb9494d6fe" (UID: "74df4612-463b-4b3c-8f2d-7dbb9494d6fe"). InnerVolumeSpecName "kube-api-access-gjz7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.917351 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-config-data" (OuterVolumeSpecName: "config-data") pod "74df4612-463b-4b3c-8f2d-7dbb9494d6fe" (UID: "74df4612-463b-4b3c-8f2d-7dbb9494d6fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.919233 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-scripts" (OuterVolumeSpecName: "scripts") pod "74df4612-463b-4b3c-8f2d-7dbb9494d6fe" (UID: "74df4612-463b-4b3c-8f2d-7dbb9494d6fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.924442 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74df4612-463b-4b3c-8f2d-7dbb9494d6fe" (UID: "74df4612-463b-4b3c-8f2d-7dbb9494d6fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.942233 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "74df4612-463b-4b3c-8f2d-7dbb9494d6fe" (UID: "74df4612-463b-4b3c-8f2d-7dbb9494d6fe"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.992751 4828 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.993921 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.993983 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.994041 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjz7z\" (UniqueName: \"kubernetes.io/projected/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-kube-api-access-gjz7z\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.994111 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.994167 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:43 crc kubenswrapper[4828]: I1205 19:25:43.994223 4828 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74df4612-463b-4b3c-8f2d-7dbb9494d6fe-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.012490 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b578dfb2-8a7f-420d-a503-d2eac607b648","Type":"ContainerDied","Data":"195942677f11a9cc63c0d603d8e517b51e29cc1cd39b3ed94458f07896d71852"} Dec 05 
19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.012656 4828 scope.go:117] "RemoveContainer" containerID="892f67ecc83c88204a5b994a1f12ae16e1385d91669efa57e70986ffceaa6285" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.012810 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.022940 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ee563d9-a334-428c-8d24-b0b1438e8ee8","Type":"ContainerStarted","Data":"a233e6eca6b27f278ac2662b9fa763341e2db8ac0e68e99019dc0509cf1bd918"} Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.027768 4828 generic.go:334] "Generic (PLEG): container finished" podID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerID="389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50" exitCode=137 Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.027839 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-699b69c564-442lb" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.027877 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-699b69c564-442lb" event={"ID":"74df4612-463b-4b3c-8f2d-7dbb9494d6fe","Type":"ContainerDied","Data":"389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50"} Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.028411 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-699b69c564-442lb" event={"ID":"74df4612-463b-4b3c-8f2d-7dbb9494d6fe","Type":"ContainerDied","Data":"99f2205c2e3f2b58388c4dbd464dddffe0015265cc6fc396bf3e7bae5835640a"} Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.047357 4828 scope.go:117] "RemoveContainer" containerID="fad55517291a1a37c02bfb4fd854ff8de7c34131aa6f5455c37593aced627250" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.052341 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.052317295 podStartE2EDuration="3.052317295s" podCreationTimestamp="2025-12-05 19:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:25:44.042266813 +0000 UTC m=+1321.937489129" watchObservedRunningTime="2025-12-05 19:25:44.052317295 +0000 UTC m=+1321.947539601" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.071978 4828 scope.go:117] "RemoveContainer" containerID="fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.076858 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.086106 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.105956 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-699b69c564-442lb"] Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.109741 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 19:25:44 crc kubenswrapper[4828]: E1205 19:25:44.110281 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a89115ab-f300-433f-934e-dce679bf1877" containerName="neutron-api" Dec 05 19:25:44 crc 
kubenswrapper[4828]: I1205 19:25:44.110304 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a89115ab-f300-433f-934e-dce679bf1877" containerName="neutron-api" Dec 05 19:25:44 crc kubenswrapper[4828]: E1205 19:25:44.110319 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.110326 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon" Dec 05 19:25:44 crc kubenswrapper[4828]: E1205 19:25:44.110349 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a89115ab-f300-433f-934e-dce679bf1877" containerName="neutron-httpd" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.110357 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a89115ab-f300-433f-934e-dce679bf1877" containerName="neutron-httpd" Dec 05 19:25:44 crc kubenswrapper[4828]: E1205 19:25:44.110369 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b578dfb2-8a7f-420d-a503-d2eac607b648" containerName="glance-httpd" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.110377 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b578dfb2-8a7f-420d-a503-d2eac607b648" containerName="glance-httpd" Dec 05 19:25:44 crc kubenswrapper[4828]: E1205 19:25:44.110395 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b578dfb2-8a7f-420d-a503-d2eac607b648" containerName="glance-log" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.110402 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b578dfb2-8a7f-420d-a503-d2eac607b648" containerName="glance-log" Dec 05 19:25:44 crc kubenswrapper[4828]: E1205 19:25:44.110424 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon-log" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.110433 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon-log" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.110642 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b578dfb2-8a7f-420d-a503-d2eac607b648" containerName="glance-log" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.110672 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a89115ab-f300-433f-934e-dce679bf1877" containerName="neutron-httpd" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.110694 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon-log" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.110708 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" containerName="horizon" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.110730 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a89115ab-f300-433f-934e-dce679bf1877" containerName="neutron-api" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.110746 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b578dfb2-8a7f-420d-a503-d2eac607b648" containerName="glance-httpd" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.111998 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.114065 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.118176 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.148643 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-699b69c564-442lb"] Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.163321 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.205193 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7126f93-6b58-41ae-8f7a-b86281398e90-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.205296 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7126f93-6b58-41ae-8f7a-b86281398e90-logs\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.205365 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7126f93-6b58-41ae-8f7a-b86281398e90-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.205394 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.205428 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7126f93-6b58-41ae-8f7a-b86281398e90-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.205469 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7126f93-6b58-41ae-8f7a-b86281398e90-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.205525 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7126f93-6b58-41ae-8f7a-b86281398e90-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" 
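The entries above show kubelet's volume reconciler walking each volume of the re-created glance-default-internal-api-0 pod through its phases: "VerifyControllerAttachedVolume started", then "MountVolume started", then "MountVolume.SetUp succeeded". When auditing a capture like this one, it can help to pair the "started" and "succeeded" messages per pod and flag any volume that never completes. The sketch below is a minimal, illustrative helper written against the Go standard library only; it is not part of kubelet or any library, and its regular expressions are assumptions keyed to the exact message wording visible in this log (including the escaped quotes around volume names).

```go
// volmounts.go — illustrative sketch, NOT part of kubelet: reads journal
// lines on stdin and reports volumes that logged "MountVolume started"
// but never logged "MountVolume.SetUp succeeded" for the same pod.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// Matches e.g.: operationExecutor.MountVolume started for volume \"logs\" ... pod="openstack/..."
	// The \\" sequences match the literal backslash-quote pairs seen in this log rendering (an assumption).
	startedRe = regexp.MustCompile(`MountVolume started for volume \\"([^\\]+)\\".*pod="([^"]+)"`)
	// Matches e.g.: MountVolume.SetUp succeeded for volume \"logs\" ... pod="openstack/..."
	setupRe = regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^\\]+)\\".*pod="([^"]+)"`)
)

func addTo(m map[string]map[string]bool, pod, vol string) {
	if m[pod] == nil {
		m[pod] = map[string]bool{}
	}
	m[pod][vol] = true
}

func main() {
	started := map[string]map[string]bool{} // pod -> volume -> seen "started"
	mounted := map[string]map[string]bool{} // pod -> volume -> seen "SetUp succeeded"

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := startedRe.FindStringSubmatch(line); m != nil {
			addTo(started, m[2], m[1])
		}
		if m := setupRe.FindStringSubmatch(line); m != nil {
			addTo(mounted, m[2], m[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}

	for pod, vols := range started {
		for vol := range vols {
			if !mounted[pod][vol] {
				fmt.Printf("pod %s: volume %q started but no SetUp success seen\n", pod, vol)
			}
		}
	}
}
```

Assuming the kubelet runs under the systemd unit started at the top of this log (the unit name may differ per deployment), something like `journalctl -u kubelet --no-pager | go run volmounts.go` would surface pods whose mounts stalled between the two phases; in the capture above, every volume of glance-default-internal-api-0 reaches "SetUp succeeded", so the sketch would print nothing for it.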
Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.205559 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzzx6\" (UniqueName: \"kubernetes.io/projected/a7126f93-6b58-41ae-8f7a-b86281398e90-kube-api-access-xzzx6\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.256600 4828 scope.go:117] "RemoveContainer" containerID="389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.280289 4828 scope.go:117] "RemoveContainer" containerID="fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9" Dec 05 19:25:44 crc kubenswrapper[4828]: E1205 19:25:44.281000 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9\": container with ID starting with fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9 not found: ID does not exist" containerID="fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.281030 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9"} err="failed to get container status \"fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9\": rpc error: code = NotFound desc = could not find container \"fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9\": container with ID starting with fa40e13fba0df0483b771e91be21b66eca689a8961b6c27a37b0b215fbdc56b9 not found: ID does not exist" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.281051 4828 scope.go:117] "RemoveContainer" containerID="389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50" Dec 05 19:25:44 crc kubenswrapper[4828]: E1205 19:25:44.281419 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50\": container with ID starting with 389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50 not found: ID does not exist" containerID="389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.281441 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50"} err="failed to get container status \"389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50\": rpc error: code = NotFound desc = could not find container \"389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50\": container with ID starting with 389a2d18b31e186a7ee496a4e107afa7e6779b1f1e859afe1edb6a6e9265ec50 not found: ID does not exist" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.306574 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7126f93-6b58-41ae-8f7a-b86281398e90-logs\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.306645 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7126f93-6b58-41ae-8f7a-b86281398e90-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.306671 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.306694 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7126f93-6b58-41ae-8f7a-b86281398e90-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.306719 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7126f93-6b58-41ae-8f7a-b86281398e90-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.306757 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7126f93-6b58-41ae-8f7a-b86281398e90-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.306779 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzzx6\" (UniqueName: \"kubernetes.io/projected/a7126f93-6b58-41ae-8f7a-b86281398e90-kube-api-access-xzzx6\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.306832 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7126f93-6b58-41ae-8f7a-b86281398e90-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.306864 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.307409 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7126f93-6b58-41ae-8f7a-b86281398e90-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.307447 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a7126f93-6b58-41ae-8f7a-b86281398e90-logs\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.312193 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7126f93-6b58-41ae-8f7a-b86281398e90-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.312556 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7126f93-6b58-41ae-8f7a-b86281398e90-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.313028 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7126f93-6b58-41ae-8f7a-b86281398e90-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.325895 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7126f93-6b58-41ae-8f7a-b86281398e90-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.336338 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.350318 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzzx6\" (UniqueName: \"kubernetes.io/projected/a7126f93-6b58-41ae-8f7a-b86281398e90-kube-api-access-xzzx6\") pod \"glance-default-internal-api-0\" (UID: \"a7126f93-6b58-41ae-8f7a-b86281398e90\") " pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.435325 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.457159 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74df4612-463b-4b3c-8f2d-7dbb9494d6fe" path="/var/lib/kubelet/pods/74df4612-463b-4b3c-8f2d-7dbb9494d6fe/volumes" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.458228 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a89115ab-f300-433f-934e-dce679bf1877" path="/var/lib/kubelet/pods/a89115ab-f300-433f-934e-dce679bf1877/volumes" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.459067 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b578dfb2-8a7f-420d-a503-d2eac607b648" path="/var/lib/kubelet/pods/b578dfb2-8a7f-420d-a503-d2eac607b648/volumes" Dec 05 19:25:44 crc kubenswrapper[4828]: I1205 19:25:44.963955 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 19:25:45 crc kubenswrapper[4828]: I1205 19:25:45.044660 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7126f93-6b58-41ae-8f7a-b86281398e90","Type":"ContainerStarted","Data":"9572fe4a519663bbe9fd1b635d6d9f4fbb125480817a9a7682869976e70e10a8"} Dec 05 19:25:45 crc kubenswrapper[4828]: I1205 19:25:45.117928 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:25:45 crc kubenswrapper[4828]: I1205 19:25:45.117992 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:25:45 crc kubenswrapper[4828]: I1205 19:25:45.118715 4828 scope.go:117] "RemoveContainer" containerID="430af8e018b4db94e5fbc1658ab5c48af8bdcbbed4d9e9f4a8b1c4d49b774c99" Dec 05 19:25:45 crc kubenswrapper[4828]: E1205 19:25:45.119067 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:25:46 crc kubenswrapper[4828]: I1205 19:25:46.064769 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7126f93-6b58-41ae-8f7a-b86281398e90","Type":"ContainerStarted","Data":"71fb9c1f49c3cff4e35df159f9993c3887f5a0a8f9d9d2179acf6e4d3da1ff2b"} Dec 05 19:25:47 crc kubenswrapper[4828]: I1205 19:25:47.075792 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7126f93-6b58-41ae-8f7a-b86281398e90","Type":"ContainerStarted","Data":"66013ec3a99a11b506b61740042056877d0c1a171e5e7aecc7af2336e3747bbf"} Dec 05 19:25:47 crc kubenswrapper[4828]: I1205 19:25:47.106659 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.106632721 podStartE2EDuration="3.106632721s" podCreationTimestamp="2025-12-05 19:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:25:47.097262857 +0000 UTC m=+1324.992485163" watchObservedRunningTime="2025-12-05 
19:25:47.106632721 +0000 UTC m=+1325.001855027" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.105923 4828 generic.go:334] "Generic (PLEG): container finished" podID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerID="4de21da88053854416fd49ea8a02fa6f7a9e864abee6f0019f83f3d3df5e1398" exitCode=0 Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.106027 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71df0c2-5325-4ded-8ba0-691757e3c7e3","Type":"ContainerDied","Data":"4de21da88053854416fd49ea8a02fa6f7a9e864abee6f0019f83f3d3df5e1398"} Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.442002 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.504755 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-combined-ca-bundle\") pod \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.504905 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-scripts\") pod \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.505009 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28ln8\" (UniqueName: \"kubernetes.io/projected/e71df0c2-5325-4ded-8ba0-691757e3c7e3-kube-api-access-28ln8\") pod \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.505039 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71df0c2-5325-4ded-8ba0-691757e3c7e3-log-httpd\") pod \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.505087 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71df0c2-5325-4ded-8ba0-691757e3c7e3-run-httpd\") pod \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.505112 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-config-data\") pod \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.505174 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-sg-core-conf-yaml\") pod \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\" (UID: \"e71df0c2-5325-4ded-8ba0-691757e3c7e3\") " Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.509931 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e71df0c2-5325-4ded-8ba0-691757e3c7e3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e71df0c2-5325-4ded-8ba0-691757e3c7e3" (UID: 
"e71df0c2-5325-4ded-8ba0-691757e3c7e3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.510242 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e71df0c2-5325-4ded-8ba0-691757e3c7e3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e71df0c2-5325-4ded-8ba0-691757e3c7e3" (UID: "e71df0c2-5325-4ded-8ba0-691757e3c7e3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.515396 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e71df0c2-5325-4ded-8ba0-691757e3c7e3-kube-api-access-28ln8" (OuterVolumeSpecName: "kube-api-access-28ln8") pod "e71df0c2-5325-4ded-8ba0-691757e3c7e3" (UID: "e71df0c2-5325-4ded-8ba0-691757e3c7e3"). InnerVolumeSpecName "kube-api-access-28ln8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.516172 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-scripts" (OuterVolumeSpecName: "scripts") pod "e71df0c2-5325-4ded-8ba0-691757e3c7e3" (UID: "e71df0c2-5325-4ded-8ba0-691757e3c7e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.536625 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e71df0c2-5325-4ded-8ba0-691757e3c7e3" (UID: "e71df0c2-5325-4ded-8ba0-691757e3c7e3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.583535 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e71df0c2-5325-4ded-8ba0-691757e3c7e3" (UID: "e71df0c2-5325-4ded-8ba0-691757e3c7e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.608171 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.608222 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28ln8\" (UniqueName: \"kubernetes.io/projected/e71df0c2-5325-4ded-8ba0-691757e3c7e3-kube-api-access-28ln8\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.608242 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71df0c2-5325-4ded-8ba0-691757e3c7e3-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.608257 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71df0c2-5325-4ded-8ba0-691757e3c7e3-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.608268 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.608280 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.616245 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-config-data" (OuterVolumeSpecName: "config-data") pod "e71df0c2-5325-4ded-8ba0-691757e3c7e3" (UID: "e71df0c2-5325-4ded-8ba0-691757e3c7e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:25:49 crc kubenswrapper[4828]: I1205 19:25:49.709754 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71df0c2-5325-4ded-8ba0-691757e3c7e3-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.119320 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71df0c2-5325-4ded-8ba0-691757e3c7e3","Type":"ContainerDied","Data":"c53d461f908bbb208662bc6cc2e126509abc3552fb3659c34480e22b59d77525"} Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.119375 4828 scope.go:117] "RemoveContainer" containerID="4cf5462488e7a535a2e168288d55a064c1c1e7cbb139cdc56b30de2e042c3d55" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.119384 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.170632 4828 scope.go:117] "RemoveContainer" containerID="5bba3920c3855a135a0b07a62cc32e16a5f05e17f2cc0fcbe9ddbf6c125875fb" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.173727 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.183477 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.195580 4828 scope.go:117] "RemoveContainer" containerID="69475276894d4479c70f86f7df3e145f803bea1e40a9068fc78858425e368d78" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.208130 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:50 crc kubenswrapper[4828]: E1205 19:25:50.208942 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="ceilometer-notification-agent" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.209004 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="ceilometer-notification-agent" Dec 05 19:25:50 crc kubenswrapper[4828]: E1205 19:25:50.209026 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="sg-core" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.209033 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="sg-core" Dec 05 19:25:50 crc kubenswrapper[4828]: E1205 19:25:50.209082 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="proxy-httpd" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.209092 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="proxy-httpd" Dec 05 19:25:50 crc kubenswrapper[4828]: E1205 19:25:50.209106 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="ceilometer-central-agent" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.209113 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="ceilometer-central-agent" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.209361 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="ceilometer-central-agent" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.209398 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="ceilometer-notification-agent" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.209415 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="sg-core" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.209426 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" containerName="proxy-httpd" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.212777 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.217348 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.225511 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.236852 4828 scope.go:117] "RemoveContainer" containerID="4de21da88053854416fd49ea8a02fa6f7a9e864abee6f0019f83f3d3df5e1398" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.241586 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.325446 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1875a281-b41f-438e-830f-07a6df73e768-log-httpd\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.325499 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.325551 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-scripts\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.325848 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9kzw\" (UniqueName: \"kubernetes.io/projected/1875a281-b41f-438e-830f-07a6df73e768-kube-api-access-h9kzw\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.325973 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.326070 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1875a281-b41f-438e-830f-07a6df73e768-run-httpd\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.326116 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-config-data\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.427885 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1875a281-b41f-438e-830f-07a6df73e768-log-httpd\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.427944 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.427987 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-scripts\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.428061 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9kzw\" (UniqueName: \"kubernetes.io/projected/1875a281-b41f-438e-830f-07a6df73e768-kube-api-access-h9kzw\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.428110 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.428135 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1875a281-b41f-438e-830f-07a6df73e768-run-httpd\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.428170 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-config-data\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.428407 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1875a281-b41f-438e-830f-07a6df73e768-log-httpd\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.428789 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1875a281-b41f-438e-830f-07a6df73e768-run-httpd\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.432799 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.434818 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.451257 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-config-data\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.451718 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-scripts\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.456861 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9kzw\" (UniqueName: \"kubernetes.io/projected/1875a281-b41f-438e-830f-07a6df73e768-kube-api-access-h9kzw\") pod \"ceilometer-0\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.463552 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e71df0c2-5325-4ded-8ba0-691757e3c7e3" path="/var/lib/kubelet/pods/e71df0c2-5325-4ded-8ba0-691757e3c7e3/volumes" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.540685 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:25:50 crc kubenswrapper[4828]: I1205 19:25:50.807802 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:25:50 crc kubenswrapper[4828]: W1205 19:25:50.816969 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1875a281_b41f_438e_830f_07a6df73e768.slice/crio-7007d2b2e1f304c10fed6dbf5db61c98f5f58bf861ac4c10bbc028c536f80f70 WatchSource:0}: Error finding container 7007d2b2e1f304c10fed6dbf5db61c98f5f58bf861ac4c10bbc028c536f80f70: Status 404 returned error can't find the container with id 7007d2b2e1f304c10fed6dbf5db61c98f5f58bf861ac4c10bbc028c536f80f70 Dec 05 19:25:51 crc kubenswrapper[4828]: I1205 19:25:51.134874 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1875a281-b41f-438e-830f-07a6df73e768","Type":"ContainerStarted","Data":"7007d2b2e1f304c10fed6dbf5db61c98f5f58bf861ac4c10bbc028c536f80f70"} Dec 05 19:25:51 crc kubenswrapper[4828]: I1205 19:25:51.565365 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 19:25:51 crc kubenswrapper[4828]: I1205 19:25:51.565660 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 19:25:51 crc kubenswrapper[4828]: I1205 19:25:51.603373 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 19:25:51 crc kubenswrapper[4828]: I1205 19:25:51.626430 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 19:25:52 crc kubenswrapper[4828]: I1205 19:25:52.146728 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 19:25:52 
crc kubenswrapper[4828]: I1205 19:25:52.146804 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 19:25:53 crc kubenswrapper[4828]: I1205 19:25:53.990446 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 19:25:53 crc kubenswrapper[4828]: I1205 19:25:53.995382 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 19:25:54 crc kubenswrapper[4828]: I1205 19:25:54.436415 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:54 crc kubenswrapper[4828]: I1205 19:25:54.436625 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:54 crc kubenswrapper[4828]: I1205 19:25:54.480027 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:54 crc kubenswrapper[4828]: I1205 19:25:54.499668 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:55 crc kubenswrapper[4828]: I1205 19:25:55.173629 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1875a281-b41f-438e-830f-07a6df73e768","Type":"ContainerStarted","Data":"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1"} Dec 05 19:25:55 crc kubenswrapper[4828]: I1205 19:25:55.174241 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:55 crc kubenswrapper[4828]: I1205 19:25:55.174343 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:56 crc kubenswrapper[4828]: I1205 19:25:56.182968 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1875a281-b41f-438e-830f-07a6df73e768","Type":"ContainerStarted","Data":"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707"} Dec 05 19:25:56 crc kubenswrapper[4828]: I1205 19:25:56.446896 4828 scope.go:117] "RemoveContainer" containerID="430af8e018b4db94e5fbc1658ab5c48af8bdcbbed4d9e9f4a8b1c4d49b774c99" Dec 05 19:25:57 crc kubenswrapper[4828]: I1205 19:25:57.152088 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:57 crc kubenswrapper[4828]: I1205 19:25:57.153907 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 19:25:57 crc kubenswrapper[4828]: I1205 19:25:57.198943 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"d000229fa1db508cef366e145d044d5816652c2a9c5bba1cd918b2052aa0438a"} Dec 05 19:25:57 crc kubenswrapper[4828]: I1205 19:25:57.199168 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:25:57 crc kubenswrapper[4828]: I1205 19:25:57.204236 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1875a281-b41f-438e-830f-07a6df73e768","Type":"ContainerStarted","Data":"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b"} Dec 05 19:25:58 crc kubenswrapper[4828]: I1205 19:25:58.217540 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1875a281-b41f-438e-830f-07a6df73e768","Type":"ContainerStarted","Data":"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed"} Dec 05 19:25:58 crc kubenswrapper[4828]: I1205 19:25:58.218214 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 19:25:58 crc kubenswrapper[4828]: I1205 19:25:58.243308 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.066038604 podStartE2EDuration="8.243293973s" podCreationTimestamp="2025-12-05 19:25:50 +0000 UTC" firstStartedPulling="2025-12-05 19:25:50.819893486 +0000 UTC m=+1328.715115792" lastFinishedPulling="2025-12-05 19:25:57.997148855 +0000 UTC m=+1335.892371161" observedRunningTime="2025-12-05 19:25:58.242334926 +0000 UTC m=+1336.137557242" watchObservedRunningTime="2025-12-05 19:25:58.243293973 +0000 UTC m=+1336.138516279" Dec 05 19:26:05 crc kubenswrapper[4828]: I1205 19:26:05.124164 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:26:05 crc kubenswrapper[4828]: I1205 19:26:05.259466 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:26:05 crc kubenswrapper[4828]: I1205 19:26:05.259531 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.226447 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hlzjj"] Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.228290 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.230432 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8qzs4" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.230583 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.241171 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.254622 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hlzjj"] Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.301831 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hlzjj\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.302054 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-config-data\") pod \"nova-cell0-conductor-db-sync-hlzjj\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.302232 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w58hx\" (UniqueName: \"kubernetes.io/projected/a67a695c-a0d3-46ac-ad8f-b90c17732e01-kube-api-access-w58hx\") pod \"nova-cell0-conductor-db-sync-hlzjj\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.302325 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-scripts\") pod \"nova-cell0-conductor-db-sync-hlzjj\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.404432 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-scripts\") pod \"nova-cell0-conductor-db-sync-hlzjj\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.404544 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hlzjj\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.404622 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-config-data\") pod \"nova-cell0-conductor-db-sync-hlzjj\" (UID: 
\"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.404690 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w58hx\" (UniqueName: \"kubernetes.io/projected/a67a695c-a0d3-46ac-ad8f-b90c17732e01-kube-api-access-w58hx\") pod \"nova-cell0-conductor-db-sync-hlzjj\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.410978 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-scripts\") pod \"nova-cell0-conductor-db-sync-hlzjj\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.413037 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-config-data\") pod \"nova-cell0-conductor-db-sync-hlzjj\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.419128 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hlzjj\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.424696 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w58hx\" (UniqueName: \"kubernetes.io/projected/a67a695c-a0d3-46ac-ad8f-b90c17732e01-kube-api-access-w58hx\") pod \"nova-cell0-conductor-db-sync-hlzjj\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:18 crc kubenswrapper[4828]: I1205 19:26:18.555496 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:19 crc kubenswrapper[4828]: I1205 19:26:19.015290 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hlzjj"] Dec 05 19:26:19 crc kubenswrapper[4828]: I1205 19:26:19.466568 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hlzjj" event={"ID":"a67a695c-a0d3-46ac-ad8f-b90c17732e01","Type":"ContainerStarted","Data":"f8719692f3e2fc50fee6b1bf35b3c0431228becd0baa3ba599bec6266deea154"} Dec 05 19:26:20 crc kubenswrapper[4828]: I1205 19:26:20.546884 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 19:26:23 crc kubenswrapper[4828]: I1205 19:26:23.901959 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 19:26:23 crc kubenswrapper[4828]: I1205 19:26:23.902636 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="55a74937-57cf-442b-a1df-16a9df3b7948" containerName="kube-state-metrics" containerID="cri-o://059661c974b913a39686e79cd07b42429f83f1824e188d2290e8c5fd790a6dc4" gracePeriod=30 Dec 05 19:26:24 crc kubenswrapper[4828]: I1205 19:26:24.530131 4828 generic.go:334] "Generic (PLEG): container finished" podID="55a74937-57cf-442b-a1df-16a9df3b7948" containerID="059661c974b913a39686e79cd07b42429f83f1824e188d2290e8c5fd790a6dc4" exitCode=2 Dec 05 19:26:24 crc kubenswrapper[4828]: I1205 19:26:24.530228 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"55a74937-57cf-442b-a1df-16a9df3b7948","Type":"ContainerDied","Data":"059661c974b913a39686e79cd07b42429f83f1824e188d2290e8c5fd790a6dc4"} Dec 05 19:26:25 crc kubenswrapper[4828]: I1205 19:26:25.564132 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:26:25 crc kubenswrapper[4828]: I1205 19:26:25.564642 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="ceilometer-central-agent" containerID="cri-o://ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1" gracePeriod=30 Dec 05 19:26:25 crc kubenswrapper[4828]: I1205 19:26:25.564952 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="proxy-httpd" containerID="cri-o://9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed" gracePeriod=30 Dec 05 19:26:25 crc kubenswrapper[4828]: I1205 19:26:25.565041 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="ceilometer-notification-agent" containerID="cri-o://de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707" gracePeriod=30 Dec 05 19:26:25 crc kubenswrapper[4828]: I1205 19:26:25.565076 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="sg-core" containerID="cri-o://5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b" gracePeriod=30 Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.414623 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.547954 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.553966 4828 generic.go:334] "Generic (PLEG): container finished" podID="1875a281-b41f-438e-830f-07a6df73e768" containerID="9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed" exitCode=0 Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.553988 4828 generic.go:334] "Generic (PLEG): container finished" podID="1875a281-b41f-438e-830f-07a6df73e768" containerID="5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b" exitCode=2 Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.553996 4828 generic.go:334] "Generic (PLEG): container finished" podID="1875a281-b41f-438e-830f-07a6df73e768" containerID="de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707" exitCode=0 Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.554003 4828 generic.go:334] "Generic (PLEG): container finished" podID="1875a281-b41f-438e-830f-07a6df73e768" containerID="ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1" exitCode=0 Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.554051 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1875a281-b41f-438e-830f-07a6df73e768","Type":"ContainerDied","Data":"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed"} Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.554074 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1875a281-b41f-438e-830f-07a6df73e768","Type":"ContainerDied","Data":"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b"} Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.554084 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1875a281-b41f-438e-830f-07a6df73e768","Type":"ContainerDied","Data":"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707"} Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.554093 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1875a281-b41f-438e-830f-07a6df73e768","Type":"ContainerDied","Data":"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1"} Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.554101 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1875a281-b41f-438e-830f-07a6df73e768","Type":"ContainerDied","Data":"7007d2b2e1f304c10fed6dbf5db61c98f5f58bf861ac4c10bbc028c536f80f70"} Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.554115 4828 scope.go:117] "RemoveContainer" containerID="9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.558721 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"55a74937-57cf-442b-a1df-16a9df3b7948","Type":"ContainerDied","Data":"1feedb9eee9f148f6bfb1d22f2ffaa17fb94ff07d4de944e00e649644aa9ac3d"} Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.558802 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.560746 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m57s4\" (UniqueName: \"kubernetes.io/projected/55a74937-57cf-442b-a1df-16a9df3b7948-kube-api-access-m57s4\") pod \"55a74937-57cf-442b-a1df-16a9df3b7948\" (UID: \"55a74937-57cf-442b-a1df-16a9df3b7948\") " Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.568721 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a74937-57cf-442b-a1df-16a9df3b7948-kube-api-access-m57s4" (OuterVolumeSpecName: "kube-api-access-m57s4") pod "55a74937-57cf-442b-a1df-16a9df3b7948" (UID: "55a74937-57cf-442b-a1df-16a9df3b7948"). InnerVolumeSpecName "kube-api-access-m57s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.585776 4828 scope.go:117] "RemoveContainer" containerID="5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.605846 4828 scope.go:117] "RemoveContainer" containerID="de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.630068 4828 scope.go:117] "RemoveContainer" containerID="ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.649727 4828 scope.go:117] "RemoveContainer" containerID="9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed" Dec 05 19:26:26 crc kubenswrapper[4828]: E1205 19:26:26.650218 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed\": container with ID starting with 9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed not found: ID does not exist" containerID="9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.650273 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed"} err="failed to get container status \"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed\": rpc error: code = NotFound desc = could not find container \"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed\": container with ID starting with 9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.650311 4828 scope.go:117] "RemoveContainer" containerID="5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b" Dec 05 19:26:26 crc kubenswrapper[4828]: E1205 19:26:26.650997 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b\": container with ID starting with 5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b not found: ID does not exist" containerID="5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.651028 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b"} err="failed 
to get container status \"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b\": rpc error: code = NotFound desc = could not find container \"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b\": container with ID starting with 5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.651051 4828 scope.go:117] "RemoveContainer" containerID="de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707" Dec 05 19:26:26 crc kubenswrapper[4828]: E1205 19:26:26.651378 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707\": container with ID starting with de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707 not found: ID does not exist" containerID="de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.651418 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707"} err="failed to get container status \"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707\": rpc error: code = NotFound desc = could not find container \"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707\": container with ID starting with de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707 not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.651445 4828 scope.go:117] "RemoveContainer" containerID="ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1" Dec 05 19:26:26 crc kubenswrapper[4828]: E1205 19:26:26.651744 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1\": container with ID starting with ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1 not found: ID does not exist" containerID="ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.651770 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1"} err="failed to get container status \"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1\": rpc error: code = NotFound desc = could not find container \"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1\": container with ID starting with ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1 not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.651786 4828 scope.go:117] "RemoveContainer" containerID="9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.652013 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed"} err="failed to get container status \"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed\": rpc error: code = NotFound desc = could not find container \"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed\": container with ID starting with 
9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.652034 4828 scope.go:117] "RemoveContainer" containerID="5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.652273 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b"} err="failed to get container status \"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b\": rpc error: code = NotFound desc = could not find container \"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b\": container with ID starting with 5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.652291 4828 scope.go:117] "RemoveContainer" containerID="de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.656118 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707"} err="failed to get container status \"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707\": rpc error: code = NotFound desc = could not find container \"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707\": container with ID starting with de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707 not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.656138 4828 scope.go:117] "RemoveContainer" containerID="ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.656416 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1"} err="failed to get container status \"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1\": rpc error: code = NotFound desc = could not find container \"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1\": container with ID starting with ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1 not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.656448 4828 scope.go:117] "RemoveContainer" containerID="9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.658209 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed"} err="failed to get container status \"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed\": rpc error: code = NotFound desc = could not find container \"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed\": container with ID starting with 9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.658232 4828 scope.go:117] "RemoveContainer" containerID="5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.658548 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b"} err="failed to get container status \"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b\": rpc error: code = NotFound desc = could not find container \"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b\": container with ID starting with 5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.658565 4828 scope.go:117] "RemoveContainer" containerID="de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.658882 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707"} err="failed to get container status \"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707\": rpc error: code = NotFound desc = could not find container \"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707\": container with ID starting with de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707 not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.658910 4828 scope.go:117] "RemoveContainer" containerID="ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.659145 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1"} err="failed to get container status \"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1\": rpc error: code = NotFound desc = could not find container \"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1\": container with ID starting with ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1 not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.659165 4828 scope.go:117] "RemoveContainer" containerID="9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.659360 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed"} err="failed to get container status \"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed\": rpc error: code = NotFound desc = could not find container \"9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed\": container with ID starting with 9bee79ad533e303df1ad36800a8b0f5d9e99a2134cffcb1afcfd595bfe1233ed not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.659386 4828 scope.go:117] "RemoveContainer" containerID="5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.659559 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b"} err="failed to get container status \"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b\": rpc error: code = NotFound desc = could not find container \"5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b\": container with ID starting with 5623c0d2f2f5fbb130b42c1e5d7e76d4374b7c1481790622d3b9f8c1f0846b0b not found: ID does not exist" Dec 
05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.659578 4828 scope.go:117] "RemoveContainer" containerID="de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.659754 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707"} err="failed to get container status \"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707\": rpc error: code = NotFound desc = could not find container \"de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707\": container with ID starting with de53c335471780385f7a3dd6f7e62381efacfb12c9b48e51abaab6873dab9707 not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.659777 4828 scope.go:117] "RemoveContainer" containerID="ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.659956 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1"} err="failed to get container status \"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1\": rpc error: code = NotFound desc = could not find container \"ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1\": container with ID starting with ab411b7b608ce5cfd3eb0a9a00b5e7355bfce634252a14a3c75ecd4700ab30f1 not found: ID does not exist" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.659979 4828 scope.go:117] "RemoveContainer" containerID="059661c974b913a39686e79cd07b42429f83f1824e188d2290e8c5fd790a6dc4" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.662543 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1875a281-b41f-438e-830f-07a6df73e768-log-httpd\") pod \"1875a281-b41f-438e-830f-07a6df73e768\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.662692 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-sg-core-conf-yaml\") pod \"1875a281-b41f-438e-830f-07a6df73e768\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.662716 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-scripts\") pod \"1875a281-b41f-438e-830f-07a6df73e768\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.662747 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-combined-ca-bundle\") pod \"1875a281-b41f-438e-830f-07a6df73e768\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.662792 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9kzw\" (UniqueName: \"kubernetes.io/projected/1875a281-b41f-438e-830f-07a6df73e768-kube-api-access-h9kzw\") pod \"1875a281-b41f-438e-830f-07a6df73e768\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.662859 4828 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-config-data\") pod \"1875a281-b41f-438e-830f-07a6df73e768\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.662876 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1875a281-b41f-438e-830f-07a6df73e768-run-httpd\") pod \"1875a281-b41f-438e-830f-07a6df73e768\" (UID: \"1875a281-b41f-438e-830f-07a6df73e768\") " Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.663075 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1875a281-b41f-438e-830f-07a6df73e768-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1875a281-b41f-438e-830f-07a6df73e768" (UID: "1875a281-b41f-438e-830f-07a6df73e768"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.663324 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1875a281-b41f-438e-830f-07a6df73e768-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1875a281-b41f-438e-830f-07a6df73e768" (UID: "1875a281-b41f-438e-830f-07a6df73e768"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.663728 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1875a281-b41f-438e-830f-07a6df73e768-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.663751 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1875a281-b41f-438e-830f-07a6df73e768-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.663764 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m57s4\" (UniqueName: \"kubernetes.io/projected/55a74937-57cf-442b-a1df-16a9df3b7948-kube-api-access-m57s4\") on node \"crc\" DevicePath \"\"" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.666128 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1875a281-b41f-438e-830f-07a6df73e768-kube-api-access-h9kzw" (OuterVolumeSpecName: "kube-api-access-h9kzw") pod "1875a281-b41f-438e-830f-07a6df73e768" (UID: "1875a281-b41f-438e-830f-07a6df73e768"). InnerVolumeSpecName "kube-api-access-h9kzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.669045 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-scripts" (OuterVolumeSpecName: "scripts") pod "1875a281-b41f-438e-830f-07a6df73e768" (UID: "1875a281-b41f-438e-830f-07a6df73e768"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.688149 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1875a281-b41f-438e-830f-07a6df73e768" (UID: "1875a281-b41f-438e-830f-07a6df73e768"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.726910 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1875a281-b41f-438e-830f-07a6df73e768" (UID: "1875a281-b41f-438e-830f-07a6df73e768"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.749793 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-config-data" (OuterVolumeSpecName: "config-data") pod "1875a281-b41f-438e-830f-07a6df73e768" (UID: "1875a281-b41f-438e-830f-07a6df73e768"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.765180 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.765211 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.765221 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.765229 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9kzw\" (UniqueName: \"kubernetes.io/projected/1875a281-b41f-438e-830f-07a6df73e768-kube-api-access-h9kzw\") on node \"crc\" DevicePath \"\"" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.765239 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1875a281-b41f-438e-830f-07a6df73e768-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.889415 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.898652 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.913707 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 19:26:26 crc kubenswrapper[4828]: E1205 19:26:26.914072 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="ceilometer-notification-agent" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.914088 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="ceilometer-notification-agent" Dec 05 19:26:26 crc kubenswrapper[4828]: E1205 19:26:26.914108 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a74937-57cf-442b-a1df-16a9df3b7948" containerName="kube-state-metrics" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.914116 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a74937-57cf-442b-a1df-16a9df3b7948" 
containerName="kube-state-metrics" Dec 05 19:26:26 crc kubenswrapper[4828]: E1205 19:26:26.914125 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="proxy-httpd" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.914133 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="proxy-httpd" Dec 05 19:26:26 crc kubenswrapper[4828]: E1205 19:26:26.914144 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="sg-core" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.914149 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="sg-core" Dec 05 19:26:26 crc kubenswrapper[4828]: E1205 19:26:26.914168 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="ceilometer-central-agent" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.914173 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="ceilometer-central-agent" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.914341 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="sg-core" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.914358 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="proxy-httpd" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.914372 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="ceilometer-notification-agent" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.914379 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1875a281-b41f-438e-830f-07a6df73e768" containerName="ceilometer-central-agent" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.914396 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a74937-57cf-442b-a1df-16a9df3b7948" containerName="kube-state-metrics" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.914946 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.919947 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.920747 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 05 19:26:26 crc kubenswrapper[4828]: I1205 19:26:26.928239 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.069877 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2de7f1c-8c50-41e1-be30-ce169c261e65-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d2de7f1c-8c50-41e1-be30-ce169c261e65\") " pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.070015 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d2de7f1c-8c50-41e1-be30-ce169c261e65-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d2de7f1c-8c50-41e1-be30-ce169c261e65\") " pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.070289 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2de7f1c-8c50-41e1-be30-ce169c261e65-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d2de7f1c-8c50-41e1-be30-ce169c261e65\") " pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.070521 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg47g\" (UniqueName: \"kubernetes.io/projected/d2de7f1c-8c50-41e1-be30-ce169c261e65-kube-api-access-gg47g\") pod \"kube-state-metrics-0\" (UID: \"d2de7f1c-8c50-41e1-be30-ce169c261e65\") " pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.172262 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2de7f1c-8c50-41e1-be30-ce169c261e65-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d2de7f1c-8c50-41e1-be30-ce169c261e65\") " pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.172621 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d2de7f1c-8c50-41e1-be30-ce169c261e65-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d2de7f1c-8c50-41e1-be30-ce169c261e65\") " pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.172777 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2de7f1c-8c50-41e1-be30-ce169c261e65-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d2de7f1c-8c50-41e1-be30-ce169c261e65\") " pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.173016 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg47g\" 
(UniqueName: \"kubernetes.io/projected/d2de7f1c-8c50-41e1-be30-ce169c261e65-kube-api-access-gg47g\") pod \"kube-state-metrics-0\" (UID: \"d2de7f1c-8c50-41e1-be30-ce169c261e65\") " pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.177474 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d2de7f1c-8c50-41e1-be30-ce169c261e65-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d2de7f1c-8c50-41e1-be30-ce169c261e65\") " pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.177474 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2de7f1c-8c50-41e1-be30-ce169c261e65-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d2de7f1c-8c50-41e1-be30-ce169c261e65\") " pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.177882 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2de7f1c-8c50-41e1-be30-ce169c261e65-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d2de7f1c-8c50-41e1-be30-ce169c261e65\") " pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.189899 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg47g\" (UniqueName: \"kubernetes.io/projected/d2de7f1c-8c50-41e1-be30-ce169c261e65-kube-api-access-gg47g\") pod \"kube-state-metrics-0\" (UID: \"d2de7f1c-8c50-41e1-be30-ce169c261e65\") " pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.233178 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.567680 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hlzjj" event={"ID":"a67a695c-a0d3-46ac-ad8f-b90c17732e01","Type":"ContainerStarted","Data":"ec4fd56f0e0a70af83e61b94fe268eb59a5952f35ad959bf58195926e27a8efa"} Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.569288 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.593061 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-hlzjj" podStartSLOduration=2.291478942 podStartE2EDuration="9.593040509s" podCreationTimestamp="2025-12-05 19:26:18 +0000 UTC" firstStartedPulling="2025-12-05 19:26:19.012384839 +0000 UTC m=+1356.907607145" lastFinishedPulling="2025-12-05 19:26:26.313946406 +0000 UTC m=+1364.209168712" observedRunningTime="2025-12-05 19:26:27.585036443 +0000 UTC m=+1365.480258759" watchObservedRunningTime="2025-12-05 19:26:27.593040509 +0000 UTC m=+1365.488262815" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.607637 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.615935 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.638686 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.641714 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.646390 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.646722 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.647068 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.647222 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.697367 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 19:26:27 crc kubenswrapper[4828]: W1205 19:26:27.701662 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2de7f1c_8c50_41e1_be30_ce169c261e65.slice/crio-432113ea121c822f1984b461f8bf6c83abb5239189618e0f013cd1c1816ae4d4 WatchSource:0}: Error finding container 432113ea121c822f1984b461f8bf6c83abb5239189618e0f013cd1c1816ae4d4: Status 404 returned error can't find the container with id 432113ea121c822f1984b461f8bf6c83abb5239189618e0f013cd1c1816ae4d4 Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.784187 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.784246 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.784316 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-config-data\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.784340 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092a7272-3983-4d2e-a1de-7ef49e53c165-log-httpd\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.784539 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-scripts\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.784577 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z776j\" (UniqueName: \"kubernetes.io/projected/092a7272-3983-4d2e-a1de-7ef49e53c165-kube-api-access-z776j\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.784604 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092a7272-3983-4d2e-a1de-7ef49e53c165-run-httpd\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.784621 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.886319 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.886429 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-config-data\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.886461 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092a7272-3983-4d2e-a1de-7ef49e53c165-log-httpd\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.886515 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-scripts\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.886535 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z776j\" (UniqueName: \"kubernetes.io/projected/092a7272-3983-4d2e-a1de-7ef49e53c165-kube-api-access-z776j\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.886557 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092a7272-3983-4d2e-a1de-7ef49e53c165-run-httpd\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.886577 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.886637 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.887305 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092a7272-3983-4d2e-a1de-7ef49e53c165-run-httpd\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.887347 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092a7272-3983-4d2e-a1de-7ef49e53c165-log-httpd\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.891424 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.891591 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-scripts\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.892262 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.893150 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.893599 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-config-data\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.903156 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z776j\" (UniqueName: \"kubernetes.io/projected/092a7272-3983-4d2e-a1de-7ef49e53c165-kube-api-access-z776j\") pod \"ceilometer-0\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " pod="openstack/ceilometer-0" Dec 05 19:26:27 crc kubenswrapper[4828]: I1205 19:26:27.958718 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:26:28 crc kubenswrapper[4828]: I1205 19:26:28.407256 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:26:28 crc kubenswrapper[4828]: I1205 19:26:28.461998 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1875a281-b41f-438e-830f-07a6df73e768" path="/var/lib/kubelet/pods/1875a281-b41f-438e-830f-07a6df73e768/volumes" Dec 05 19:26:28 crc kubenswrapper[4828]: I1205 19:26:28.462679 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a74937-57cf-442b-a1df-16a9df3b7948" path="/var/lib/kubelet/pods/55a74937-57cf-442b-a1df-16a9df3b7948/volumes" Dec 05 19:26:28 crc kubenswrapper[4828]: I1205 19:26:28.592118 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d2de7f1c-8c50-41e1-be30-ce169c261e65","Type":"ContainerStarted","Data":"eade00e3f733ca3b6cbd5d4eba12c964b8b90bba82e225dfaa888e038cc1c255"} Dec 05 19:26:28 crc kubenswrapper[4828]: I1205 19:26:28.592208 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d2de7f1c-8c50-41e1-be30-ce169c261e65","Type":"ContainerStarted","Data":"432113ea121c822f1984b461f8bf6c83abb5239189618e0f013cd1c1816ae4d4"} Dec 05 19:26:28 crc kubenswrapper[4828]: I1205 19:26:28.601020 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 05 19:26:28 crc kubenswrapper[4828]: I1205 19:26:28.615496 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092a7272-3983-4d2e-a1de-7ef49e53c165","Type":"ContainerStarted","Data":"1bf1eaafe6f0b112af7aba5cd6f4fda3c80955f93675b6a4c74a51f9f53de022"} Dec 05 19:26:29 crc kubenswrapper[4828]: I1205 19:26:29.627356 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092a7272-3983-4d2e-a1de-7ef49e53c165","Type":"ContainerStarted","Data":"085bc3eb0c07c1c738dacb7d98d0049633936110afe5b6ed0b26f0b1f0d32e1d"} Dec 05 19:26:30 crc kubenswrapper[4828]: I1205 19:26:30.639892 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092a7272-3983-4d2e-a1de-7ef49e53c165","Type":"ContainerStarted","Data":"c7b63794b0911ab0c47f0ff4fb06045b02b85b6bc09cbcb52550a459ca6411f2"} Dec 05 19:26:30 crc kubenswrapper[4828]: I1205 19:26:30.640134 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092a7272-3983-4d2e-a1de-7ef49e53c165","Type":"ContainerStarted","Data":"2098afe60574f01b31be041fde941da08152f3ec1ef4cb9637f83db85e8e80ba"} Dec 05 19:26:32 crc kubenswrapper[4828]: I1205 19:26:32.475813 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=6.054139747 podStartE2EDuration="6.475780221s" podCreationTimestamp="2025-12-05 19:26:26 +0000 UTC" firstStartedPulling="2025-12-05 19:26:27.704357637 +0000 UTC m=+1365.599579943" lastFinishedPulling="2025-12-05 19:26:28.125998121 +0000 UTC m=+1366.021220417" observedRunningTime="2025-12-05 19:26:28.65369307 +0000 UTC m=+1366.548915376" watchObservedRunningTime="2025-12-05 19:26:32.475780221 +0000 UTC m=+1370.371002557" Dec 05 19:26:32 crc kubenswrapper[4828]: I1205 19:26:32.662731 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092a7272-3983-4d2e-a1de-7ef49e53c165","Type":"ContainerStarted","Data":"8f9297e5d9a74d260d74b306d7d27665a236c2c519f1a4b7c32f9f69969620b4"} Dec 05 19:26:32 crc kubenswrapper[4828]: I1205 19:26:32.662962 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 19:26:32 crc kubenswrapper[4828]: I1205 19:26:32.701685 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.55038753 podStartE2EDuration="5.701662174s" podCreationTimestamp="2025-12-05 19:26:27 +0000 UTC" firstStartedPulling="2025-12-05 19:26:28.42315839 +0000 UTC m=+1366.318380696" lastFinishedPulling="2025-12-05 19:26:31.574433034 +0000 UTC m=+1369.469655340" observedRunningTime="2025-12-05 19:26:32.691435988 +0000 UTC m=+1370.586658314" watchObservedRunningTime="2025-12-05 19:26:32.701662174 +0000 UTC m=+1370.596884490" Dec 05 19:26:35 crc kubenswrapper[4828]: I1205 19:26:35.259436 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:26:35 crc kubenswrapper[4828]: I1205 19:26:35.259810 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:26:35 crc kubenswrapper[4828]: I1205 19:26:35.259904 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:26:35 crc kubenswrapper[4828]: I1205 19:26:35.260727 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aab20e62cb85e96facfecb4602cb199c408644c9ab8b87bd02db08dd9a3628e0"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 19:26:35 crc kubenswrapper[4828]: I1205 19:26:35.260796 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://aab20e62cb85e96facfecb4602cb199c408644c9ab8b87bd02db08dd9a3628e0" gracePeriod=600 Dec 05 19:26:35 crc kubenswrapper[4828]: I1205 19:26:35.691398 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" 
containerID="aab20e62cb85e96facfecb4602cb199c408644c9ab8b87bd02db08dd9a3628e0" exitCode=0 Dec 05 19:26:35 crc kubenswrapper[4828]: I1205 19:26:35.691710 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"aab20e62cb85e96facfecb4602cb199c408644c9ab8b87bd02db08dd9a3628e0"} Dec 05 19:26:35 crc kubenswrapper[4828]: I1205 19:26:35.691742 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"} Dec 05 19:26:35 crc kubenswrapper[4828]: I1205 19:26:35.691761 4828 scope.go:117] "RemoveContainer" containerID="0cc286b8dceed84d395e55058b3c3160e80eae904633740211fa06dda4862d4f" Dec 05 19:26:37 crc kubenswrapper[4828]: I1205 19:26:37.248880 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 05 19:26:38 crc kubenswrapper[4828]: I1205 19:26:38.732630 4828 generic.go:334] "Generic (PLEG): container finished" podID="a67a695c-a0d3-46ac-ad8f-b90c17732e01" containerID="ec4fd56f0e0a70af83e61b94fe268eb59a5952f35ad959bf58195926e27a8efa" exitCode=0 Dec 05 19:26:38 crc kubenswrapper[4828]: I1205 19:26:38.732697 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hlzjj" event={"ID":"a67a695c-a0d3-46ac-ad8f-b90c17732e01","Type":"ContainerDied","Data":"ec4fd56f0e0a70af83e61b94fe268eb59a5952f35ad959bf58195926e27a8efa"} Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.110075 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.226951 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-combined-ca-bundle\") pod \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.226995 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-config-data\") pod \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.227037 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-scripts\") pod \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.227175 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w58hx\" (UniqueName: \"kubernetes.io/projected/a67a695c-a0d3-46ac-ad8f-b90c17732e01-kube-api-access-w58hx\") pod \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\" (UID: \"a67a695c-a0d3-46ac-ad8f-b90c17732e01\") " Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.233190 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67a695c-a0d3-46ac-ad8f-b90c17732e01-kube-api-access-w58hx" (OuterVolumeSpecName: "kube-api-access-w58hx") pod "a67a695c-a0d3-46ac-ad8f-b90c17732e01" (UID: "a67a695c-a0d3-46ac-ad8f-b90c17732e01"). InnerVolumeSpecName "kube-api-access-w58hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.234094 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-scripts" (OuterVolumeSpecName: "scripts") pod "a67a695c-a0d3-46ac-ad8f-b90c17732e01" (UID: "a67a695c-a0d3-46ac-ad8f-b90c17732e01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.255959 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a67a695c-a0d3-46ac-ad8f-b90c17732e01" (UID: "a67a695c-a0d3-46ac-ad8f-b90c17732e01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.266137 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-config-data" (OuterVolumeSpecName: "config-data") pod "a67a695c-a0d3-46ac-ad8f-b90c17732e01" (UID: "a67a695c-a0d3-46ac-ad8f-b90c17732e01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.329794 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w58hx\" (UniqueName: \"kubernetes.io/projected/a67a695c-a0d3-46ac-ad8f-b90c17732e01-kube-api-access-w58hx\") on node \"crc\" DevicePath \"\"" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.329827 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.329852 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.329865 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67a695c-a0d3-46ac-ad8f-b90c17732e01-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.757935 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hlzjj" event={"ID":"a67a695c-a0d3-46ac-ad8f-b90c17732e01","Type":"ContainerDied","Data":"f8719692f3e2fc50fee6b1bf35b3c0431228becd0baa3ba599bec6266deea154"} Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.757983 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8719692f3e2fc50fee6b1bf35b3c0431228becd0baa3ba599bec6266deea154" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.758066 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hlzjj" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.860159 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 19:26:40 crc kubenswrapper[4828]: E1205 19:26:40.871058 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67a695c-a0d3-46ac-ad8f-b90c17732e01" containerName="nova-cell0-conductor-db-sync" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.871093 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67a695c-a0d3-46ac-ad8f-b90c17732e01" containerName="nova-cell0-conductor-db-sync" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.873514 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67a695c-a0d3-46ac-ad8f-b90c17732e01" containerName="nova-cell0-conductor-db-sync" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.876352 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.885391 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.886248 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8qzs4" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.909545 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.941755 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrrtn\" (UniqueName: \"kubernetes.io/projected/6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d-kube-api-access-nrrtn\") pod \"nova-cell0-conductor-0\" (UID: \"6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d\") " pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.941801 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d\") " pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:40 crc kubenswrapper[4828]: I1205 19:26:40.941919 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d\") " pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:41 crc kubenswrapper[4828]: I1205 19:26:41.043900 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrrtn\" (UniqueName: \"kubernetes.io/projected/6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d-kube-api-access-nrrtn\") pod \"nova-cell0-conductor-0\" (UID: \"6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d\") " pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:41 crc kubenswrapper[4828]: I1205 19:26:41.043987 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d\") " pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:41 crc kubenswrapper[4828]: I1205 19:26:41.044073 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d\") " pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:41 crc kubenswrapper[4828]: I1205 19:26:41.051958 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d\") " pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:41 crc kubenswrapper[4828]: I1205 19:26:41.053782 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d\") " pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:41 crc kubenswrapper[4828]: I1205 19:26:41.067925 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrrtn\" (UniqueName: \"kubernetes.io/projected/6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d-kube-api-access-nrrtn\") pod \"nova-cell0-conductor-0\" (UID: \"6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d\") " pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:41 crc kubenswrapper[4828]: I1205 19:26:41.209346 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:41 crc kubenswrapper[4828]: I1205 19:26:41.707236 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 19:26:41 crc kubenswrapper[4828]: W1205 19:26:41.710547 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f14d5c3_1d7a_4efc_90f3_e97d2cb4098d.slice/crio-e477fee3bf6ffed357297e905a56d055d7b61deb259e6d16d648594115be0748 WatchSource:0}: Error finding container e477fee3bf6ffed357297e905a56d055d7b61deb259e6d16d648594115be0748: Status 404 returned error can't find the container with id e477fee3bf6ffed357297e905a56d055d7b61deb259e6d16d648594115be0748 Dec 05 19:26:41 crc kubenswrapper[4828]: I1205 19:26:41.768928 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d","Type":"ContainerStarted","Data":"e477fee3bf6ffed357297e905a56d055d7b61deb259e6d16d648594115be0748"} Dec 05 19:26:42 crc kubenswrapper[4828]: I1205 19:26:42.788432 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d","Type":"ContainerStarted","Data":"b24629f47ea6fb696a641687369d9f0ac4e8a2c4c5780254cc68f29d337694f0"} Dec 05 19:26:42 crc kubenswrapper[4828]: I1205 19:26:42.793077 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:42 crc kubenswrapper[4828]: I1205 19:26:42.814052 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.813831485 podStartE2EDuration="2.813831485s" podCreationTimestamp="2025-12-05 19:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:26:42.811671207 +0000 UTC m=+1380.706893513" watchObservedRunningTime="2025-12-05 19:26:42.813831485 +0000 UTC m=+1380.709053801" Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.244303 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.772207 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-km7zx"] Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.773621 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.775930 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.776415 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.794026 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-km7zx"] Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.923077 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-scripts\") pod \"nova-cell0-cell-mapping-km7zx\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.923158 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-km7zx\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.923355 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-config-data\") pod \"nova-cell0-cell-mapping-km7zx\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.923413 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqnft\" (UniqueName: \"kubernetes.io/projected/0caade90-0fc6-4fa9-9c58-8251b88cb827-kube-api-access-zqnft\") pod \"nova-cell0-cell-mapping-km7zx\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.962801 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.963902 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.970933 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 19:26:51 crc kubenswrapper[4828]: I1205 19:26:51.983193 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.010162 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-47hsp"] Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.020173 4828 util.go:30] "No sandbox for pod can be found. 
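The pod_startup_latency_tracker entries in this section fit a simple relation: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). When nothing was pulled, the pull timestamps are the zero time ("0001-01-01 ...") and the two durations coincide, as in the nova-cell0-conductor-0 entry above (both 2.813831485s). The arithmetic checks out against the kube-state-metrics-0 entry when the pull window is taken from its monotonic "m=+" offsets:

```go
package main

import "fmt"

func main() {
	// Values copied from the kube-state-metrics-0 "Observed pod startup
	// duration" entry in this section; the pull window uses the monotonic
	// "m=+" offsets, in seconds.
	const (
		e2e              = 6.475780221    // podStartE2EDuration
		firstStartedPull = 1365.599579943 // firstStartedPulling, m=+
		lastFinishedPull = 1366.021220417 // lastFinishedPulling, m=+
		reportedSLO      = 6.054139747    // podStartSLOduration
	)
	slo := e2e - (lastFinishedPull - firstStartedPull)
	fmt.Printf("computed %.9fs, reported %.9fs\n", slo, reportedSLO)
}
```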
Need to start a new one" pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.025756 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-config-data\") pod \"nova-cell0-cell-mapping-km7zx\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.025810 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqnft\" (UniqueName: \"kubernetes.io/projected/0caade90-0fc6-4fa9-9c58-8251b88cb827-kube-api-access-zqnft\") pod \"nova-cell0-cell-mapping-km7zx\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.025917 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-scripts\") pod \"nova-cell0-cell-mapping-km7zx\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.025943 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-km7zx\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.060035 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-scripts\") pod \"nova-cell0-cell-mapping-km7zx\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.060730 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-config-data\") pod \"nova-cell0-cell-mapping-km7zx\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.062668 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-km7zx\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.125423 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqnft\" (UniqueName: \"kubernetes.io/projected/0caade90-0fc6-4fa9-9c58-8251b88cb827-kube-api-access-zqnft\") pod \"nova-cell0-cell-mapping-km7zx\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.128700 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429a780e-1367-4ebd-bfef-dddb23dfbcb0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\") " pod="openstack/nova-scheduler-0" Dec 05 
19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.128777 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb7bb\" (UniqueName: \"kubernetes.io/projected/81989aca-451d-4a70-b683-52f8675c3f12-kube-api-access-fb7bb\") pod \"redhat-operators-47hsp\" (UID: \"81989aca-451d-4a70-b683-52f8675c3f12\") " pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.128863 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429a780e-1367-4ebd-bfef-dddb23dfbcb0-config-data\") pod \"nova-scheduler-0\" (UID: \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\") " pod="openstack/nova-scheduler-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.128890 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81989aca-451d-4a70-b683-52f8675c3f12-utilities\") pod \"redhat-operators-47hsp\" (UID: \"81989aca-451d-4a70-b683-52f8675c3f12\") " pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.128921 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnp8m\" (UniqueName: \"kubernetes.io/projected/429a780e-1367-4ebd-bfef-dddb23dfbcb0-kube-api-access-pnp8m\") pod \"nova-scheduler-0\" (UID: \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\") " pod="openstack/nova-scheduler-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.128975 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81989aca-451d-4a70-b683-52f8675c3f12-catalog-content\") pod \"redhat-operators-47hsp\" (UID: \"81989aca-451d-4a70-b683-52f8675c3f12\") " pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.177276 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.180026 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.185182 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.205324 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-47hsp"] Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.232069 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429a780e-1367-4ebd-bfef-dddb23dfbcb0-config-data\") pod \"nova-scheduler-0\" (UID: \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\") " pod="openstack/nova-scheduler-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.232121 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81989aca-451d-4a70-b683-52f8675c3f12-utilities\") pod \"redhat-operators-47hsp\" (UID: \"81989aca-451d-4a70-b683-52f8675c3f12\") " pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.232158 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnp8m\" (UniqueName: \"kubernetes.io/projected/429a780e-1367-4ebd-bfef-dddb23dfbcb0-kube-api-access-pnp8m\") pod \"nova-scheduler-0\" (UID: \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\") " pod="openstack/nova-scheduler-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.232212 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81989aca-451d-4a70-b683-52f8675c3f12-catalog-content\") pod \"redhat-operators-47hsp\" (UID: \"81989aca-451d-4a70-b683-52f8675c3f12\") " pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.232319 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429a780e-1367-4ebd-bfef-dddb23dfbcb0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\") " pod="openstack/nova-scheduler-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.232361 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb7bb\" (UniqueName: \"kubernetes.io/projected/81989aca-451d-4a70-b683-52f8675c3f12-kube-api-access-fb7bb\") pod \"redhat-operators-47hsp\" (UID: \"81989aca-451d-4a70-b683-52f8675c3f12\") " pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.234185 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81989aca-451d-4a70-b683-52f8675c3f12-utilities\") pod \"redhat-operators-47hsp\" (UID: \"81989aca-451d-4a70-b683-52f8675c3f12\") " pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.234417 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81989aca-451d-4a70-b683-52f8675c3f12-catalog-content\") pod \"redhat-operators-47hsp\" (UID: \"81989aca-451d-4a70-b683-52f8675c3f12\") " pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.243864 4828 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429a780e-1367-4ebd-bfef-dddb23dfbcb0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\") " pod="openstack/nova-scheduler-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.249432 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.254413 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429a780e-1367-4ebd-bfef-dddb23dfbcb0-config-data\") pod \"nova-scheduler-0\" (UID: \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\") " pod="openstack/nova-scheduler-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.260427 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnp8m\" (UniqueName: \"kubernetes.io/projected/429a780e-1367-4ebd-bfef-dddb23dfbcb0-kube-api-access-pnp8m\") pod \"nova-scheduler-0\" (UID: \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\") " pod="openstack/nova-scheduler-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.271451 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb7bb\" (UniqueName: \"kubernetes.io/projected/81989aca-451d-4a70-b683-52f8675c3f12-kube-api-access-fb7bb\") pod \"redhat-operators-47hsp\" (UID: \"81989aca-451d-4a70-b683-52f8675c3f12\") " pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.276064 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-d2lkm"] Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.277549 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.288716 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.289399 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-d2lkm"] Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.319530 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.321060 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.336210 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.337643 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.337741 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-logs\") pod \"nova-metadata-0\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.340606 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-dns-svc\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.340713 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-config\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.340775 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.340932 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx2zt\" (UniqueName: \"kubernetes.io/projected/0fc12a4b-c235-4a30-b616-06b6ccb812a0-kube-api-access-hx2zt\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.341001 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-config-data\") pod \"nova-metadata-0\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.341035 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdzln\" (UniqueName: \"kubernetes.io/projected/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-kube-api-access-wdzln\") pod \"nova-metadata-0\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.341103 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.341118 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.344380 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.361443 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.415141 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.446921 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.446976 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e187cd7d-903e-4396-b2b5-f0b87c944956-config-data\") pod \"nova-api-0\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.447021 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5bnl\" (UniqueName: \"kubernetes.io/projected/e187cd7d-903e-4396-b2b5-f0b87c944956-kube-api-access-k5bnl\") pod \"nova-api-0\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.447057 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx2zt\" (UniqueName: \"kubernetes.io/projected/0fc12a4b-c235-4a30-b616-06b6ccb812a0-kube-api-access-hx2zt\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.447145 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-config-data\") pod \"nova-metadata-0\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.447196 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdzln\" (UniqueName: \"kubernetes.io/projected/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-kube-api-access-wdzln\") pod \"nova-metadata-0\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.447335 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.447383 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.447426 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.447491 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e187cd7d-903e-4396-b2b5-f0b87c944956-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.447538 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-logs\") pod \"nova-metadata-0\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.447564 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-dns-svc\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.447613 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-config\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.447659 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e187cd7d-903e-4396-b2b5-f0b87c944956-logs\") pod \"nova-api-0\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.448539 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.457618 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-logs\") pod \"nova-metadata-0\" (UID: 
\"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.458927 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.460276 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-dns-svc\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.464265 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.464315 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.467175 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-config\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.468142 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdzln\" (UniqueName: \"kubernetes.io/projected/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-kube-api-access-wdzln\") pod \"nova-metadata-0\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.469740 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-config-data\") pod \"nova-metadata-0\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.473378 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx2zt\" (UniqueName: \"kubernetes.io/projected/0fc12a4b-c235-4a30-b616-06b6ccb812a0-kube-api-access-hx2zt\") pod \"dnsmasq-dns-757b4f8459-d2lkm\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.482171 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.483571 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.487682 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.488025 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.550107 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e187cd7d-903e-4396-b2b5-f0b87c944956-config-data\") pod \"nova-api-0\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.550456 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5bnl\" (UniqueName: \"kubernetes.io/projected/e187cd7d-903e-4396-b2b5-f0b87c944956-kube-api-access-k5bnl\") pod \"nova-api-0\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.550544 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078b2cf8-a9b3-422f-b0e9-2d60586d9062-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.550624 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078b2cf8-a9b3-422f-b0e9-2d60586d9062-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.550657 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7p8q\" (UniqueName: \"kubernetes.io/projected/078b2cf8-a9b3-422f-b0e9-2d60586d9062-kube-api-access-t7p8q\") pod \"nova-cell1-novncproxy-0\" (UID: \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.550738 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e187cd7d-903e-4396-b2b5-f0b87c944956-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.550852 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e187cd7d-903e-4396-b2b5-f0b87c944956-logs\") pod \"nova-api-0\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.551796 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e187cd7d-903e-4396-b2b5-f0b87c944956-logs\") pod \"nova-api-0\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.573377 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e187cd7d-903e-4396-b2b5-f0b87c944956-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.576498 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e187cd7d-903e-4396-b2b5-f0b87c944956-config-data\") pod \"nova-api-0\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.583905 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5bnl\" (UniqueName: \"kubernetes.io/projected/e187cd7d-903e-4396-b2b5-f0b87c944956-kube-api-access-k5bnl\") pod \"nova-api-0\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.653701 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078b2cf8-a9b3-422f-b0e9-2d60586d9062-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.653902 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078b2cf8-a9b3-422f-b0e9-2d60586d9062-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.653975 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7p8q\" (UniqueName: \"kubernetes.io/projected/078b2cf8-a9b3-422f-b0e9-2d60586d9062-kube-api-access-t7p8q\") pod \"nova-cell1-novncproxy-0\" (UID: \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.660336 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078b2cf8-a9b3-422f-b0e9-2d60586d9062-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.669359 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078b2cf8-a9b3-422f-b0e9-2d60586d9062-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.675097 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7p8q\" (UniqueName: \"kubernetes.io/projected/078b2cf8-a9b3-422f-b0e9-2d60586d9062-kube-api-access-t7p8q\") pod \"nova-cell1-novncproxy-0\" (UID: \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.682864 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.741011 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.761542 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.816287 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:26:52 crc kubenswrapper[4828]: I1205 19:26:52.992692 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.121687 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jzjgf"] Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.124731 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.127792 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.128189 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.154419 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-km7zx"] Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.166368 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-scripts\") pod \"nova-cell1-conductor-db-sync-jzjgf\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.166540 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djqjk\" (UniqueName: \"kubernetes.io/projected/f832ed19-2e34-439e-bb52-37b2919b810e-kube-api-access-djqjk\") pod \"nova-cell1-conductor-db-sync-jzjgf\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.166610 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jzjgf\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.166675 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-config-data\") pod \"nova-cell1-conductor-db-sync-jzjgf\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.170668 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jzjgf"] Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.182298 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-47hsp"] Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.197046 4828 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.268838 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-scripts\") pod \"nova-cell1-conductor-db-sync-jzjgf\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.269222 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djqjk\" (UniqueName: \"kubernetes.io/projected/f832ed19-2e34-439e-bb52-37b2919b810e-kube-api-access-djqjk\") pod \"nova-cell1-conductor-db-sync-jzjgf\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.269253 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jzjgf\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.269296 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-config-data\") pod \"nova-cell1-conductor-db-sync-jzjgf\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.275068 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jzjgf\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.275495 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-scripts\") pod \"nova-cell1-conductor-db-sync-jzjgf\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.286930 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-config-data\") pod \"nova-cell1-conductor-db-sync-jzjgf\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.318496 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djqjk\" (UniqueName: \"kubernetes.io/projected/f832ed19-2e34-439e-bb52-37b2919b810e-kube-api-access-djqjk\") pod \"nova-cell1-conductor-db-sync-jzjgf\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.464427 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.498939 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-d2lkm"] Dec 05 19:26:53 crc kubenswrapper[4828]: W1205 19:26:53.516834 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fc12a4b_c235_4a30_b616_06b6ccb812a0.slice/crio-7c833188e77bf40fbdc83fc4d10dffce54f175de167f2ded7b9c504713d402d7 WatchSource:0}: Error finding container 7c833188e77bf40fbdc83fc4d10dffce54f175de167f2ded7b9c504713d402d7: Status 404 returned error can't find the container with id 7c833188e77bf40fbdc83fc4d10dffce54f175de167f2ded7b9c504713d402d7 Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.598417 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:26:53 crc kubenswrapper[4828]: W1205 19:26:53.649911 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode187cd7d_903e_4396_b2b5_f0b87c944956.slice/crio-f7f90f23b99c7fe5f88787ee603f0f48b770e0682b6821b3e16834fdf1ec1179 WatchSource:0}: Error finding container f7f90f23b99c7fe5f88787ee603f0f48b770e0682b6821b3e16834fdf1ec1179: Status 404 returned error can't find the container with id f7f90f23b99c7fe5f88787ee603f0f48b770e0682b6821b3e16834fdf1ec1179 Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.705116 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.928665 4828 generic.go:334] "Generic (PLEG): container finished" podID="81989aca-451d-4a70-b683-52f8675c3f12" containerID="6f337fd65daa11bea0549196b6aaddfa8a05795d2385239d56db2928d673f3bd" exitCode=0 Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.929041 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47hsp" event={"ID":"81989aca-451d-4a70-b683-52f8675c3f12","Type":"ContainerDied","Data":"6f337fd65daa11bea0549196b6aaddfa8a05795d2385239d56db2928d673f3bd"} Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.929071 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47hsp" event={"ID":"81989aca-451d-4a70-b683-52f8675c3f12","Type":"ContainerStarted","Data":"9b81541b062e2b95eaf0ba37bbd3381dc47463fc8dc39f6ff14dd4c8e1b2ff07"} Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.951043 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9","Type":"ContainerStarted","Data":"43bd7a0358e53baa05a96eafa8a548a6a8e2c34db607d9cefbbe2d43d465b197"} Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.964641 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e187cd7d-903e-4396-b2b5-f0b87c944956","Type":"ContainerStarted","Data":"f7f90f23b99c7fe5f88787ee603f0f48b770e0682b6821b3e16834fdf1ec1179"} Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.974766 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"429a780e-1367-4ebd-bfef-dddb23dfbcb0","Type":"ContainerStarted","Data":"11ba76cdd5191007dfaf07d033386d09fdb0c1303c6fd0f42a13296488dcf059"} Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.981349 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-km7zx" event={"ID":"0caade90-0fc6-4fa9-9c58-8251b88cb827","Type":"ContainerStarted","Data":"7da84ded894cbe24feecc8768a6b4dde1081df6c215a298f12a6bcc8451f487e"} Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.981393 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-km7zx" event={"ID":"0caade90-0fc6-4fa9-9c58-8251b88cb827","Type":"ContainerStarted","Data":"3758c5768ef6fbb6589eb8d6ab5ea7e789971620dd7ae9e536abff5df4572c88"} Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.983093 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"078b2cf8-a9b3-422f-b0e9-2d60586d9062","Type":"ContainerStarted","Data":"acc350bc109ef3dc30840c60710ab7d7d95658dbf0b830383b50ccbfdbb58414"} Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.990336 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" event={"ID":"0fc12a4b-c235-4a30-b616-06b6ccb812a0","Type":"ContainerStarted","Data":"743cbd5922013d2fabdd12757cdfae6cd701e050c51ef05882687e44a6da8c1c"} Dec 05 19:26:53 crc kubenswrapper[4828]: I1205 19:26:53.990396 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" event={"ID":"0fc12a4b-c235-4a30-b616-06b6ccb812a0","Type":"ContainerStarted","Data":"7c833188e77bf40fbdc83fc4d10dffce54f175de167f2ded7b9c504713d402d7"} Dec 05 19:26:54 crc kubenswrapper[4828]: I1205 19:26:54.016122 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-km7zx" podStartSLOduration=3.016103103 podStartE2EDuration="3.016103103s" podCreationTimestamp="2025-12-05 19:26:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:26:54.005927559 +0000 UTC m=+1391.901149865" watchObservedRunningTime="2025-12-05 19:26:54.016103103 +0000 UTC m=+1391.911325399" Dec 05 19:26:54 crc kubenswrapper[4828]: I1205 19:26:54.032724 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jzjgf"] Dec 05 19:26:55 crc kubenswrapper[4828]: I1205 19:26:55.003394 4828 generic.go:334] "Generic (PLEG): container finished" podID="0fc12a4b-c235-4a30-b616-06b6ccb812a0" containerID="743cbd5922013d2fabdd12757cdfae6cd701e050c51ef05882687e44a6da8c1c" exitCode=0 Dec 05 19:26:55 crc kubenswrapper[4828]: I1205 19:26:55.003457 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" event={"ID":"0fc12a4b-c235-4a30-b616-06b6ccb812a0","Type":"ContainerDied","Data":"743cbd5922013d2fabdd12757cdfae6cd701e050c51ef05882687e44a6da8c1c"} Dec 05 19:26:55 crc kubenswrapper[4828]: I1205 19:26:55.004133 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:26:55 crc kubenswrapper[4828]: I1205 19:26:55.004150 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" event={"ID":"0fc12a4b-c235-4a30-b616-06b6ccb812a0","Type":"ContainerStarted","Data":"afc1891d2c3a3b9b3b050456cb66b4f37ea89998ae0b50557706423a56fc7b16"} Dec 05 19:26:55 crc kubenswrapper[4828]: I1205 19:26:55.008163 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jzjgf" 
event={"ID":"f832ed19-2e34-439e-bb52-37b2919b810e","Type":"ContainerStarted","Data":"86f70020404684ce0a0e17ce7204128d13b3cb7c22ace5acbdf319b82d377102"} Dec 05 19:26:55 crc kubenswrapper[4828]: I1205 19:26:55.008197 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jzjgf" event={"ID":"f832ed19-2e34-439e-bb52-37b2919b810e","Type":"ContainerStarted","Data":"eef6bc961d151f3a06bc5ecaf09492d5b998e699103377f54e57f2222220a521"} Dec 05 19:26:55 crc kubenswrapper[4828]: I1205 19:26:55.035763 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" podStartSLOduration=3.034610186 podStartE2EDuration="3.034610186s" podCreationTimestamp="2025-12-05 19:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:26:55.033410603 +0000 UTC m=+1392.928632909" watchObservedRunningTime="2025-12-05 19:26:55.034610186 +0000 UTC m=+1392.929832502" Dec 05 19:26:55 crc kubenswrapper[4828]: I1205 19:26:55.049588 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jzjgf" podStartSLOduration=2.049568389 podStartE2EDuration="2.049568389s" podCreationTimestamp="2025-12-05 19:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:26:55.046278421 +0000 UTC m=+1392.941500727" watchObservedRunningTime="2025-12-05 19:26:55.049568389 +0000 UTC m=+1392.944790695" Dec 05 19:26:56 crc kubenswrapper[4828]: I1205 19:26:56.019506 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47hsp" event={"ID":"81989aca-451d-4a70-b683-52f8675c3f12","Type":"ContainerStarted","Data":"70210a304a57aeaf56fff5eb45d7d190ce35c6a6807e910468d9b5fb664d6ac7"} Dec 05 19:26:56 crc kubenswrapper[4828]: I1205 19:26:56.291157 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:26:56 crc kubenswrapper[4828]: I1205 19:26:56.323523 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 19:26:57 crc kubenswrapper[4828]: I1205 19:26:57.029108 4828 generic.go:334] "Generic (PLEG): container finished" podID="81989aca-451d-4a70-b683-52f8675c3f12" containerID="70210a304a57aeaf56fff5eb45d7d190ce35c6a6807e910468d9b5fb664d6ac7" exitCode=0 Dec 05 19:26:57 crc kubenswrapper[4828]: I1205 19:26:57.029150 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47hsp" event={"ID":"81989aca-451d-4a70-b683-52f8675c3f12","Type":"ContainerDied","Data":"70210a304a57aeaf56fff5eb45d7d190ce35c6a6807e910468d9b5fb664d6ac7"} Dec 05 19:26:58 crc kubenswrapper[4828]: I1205 19:26:58.192144 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 19:27:02 crc kubenswrapper[4828]: I1205 19:27:02.743106 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:27:02 crc kubenswrapper[4828]: I1205 19:27:02.802688 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-krcgx"] Dec 05 19:27:02 crc kubenswrapper[4828]: I1205 19:27:02.802930 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" 
podUID="9c825b05-679b-4869-846d-ba11b6cdda19" containerName="dnsmasq-dns" containerID="cri-o://3a223c5d417b42dc622810b8d6b0f0ec6b1af5d0265fc9aa843fa9c5c8e2c8d4" gracePeriod=10 Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.112687 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e187cd7d-903e-4396-b2b5-f0b87c944956","Type":"ContainerStarted","Data":"159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479"} Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.113053 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e187cd7d-903e-4396-b2b5-f0b87c944956","Type":"ContainerStarted","Data":"7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce"} Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.118660 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"429a780e-1367-4ebd-bfef-dddb23dfbcb0","Type":"ContainerStarted","Data":"9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c"} Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.124694 4828 generic.go:334] "Generic (PLEG): container finished" podID="0caade90-0fc6-4fa9-9c58-8251b88cb827" containerID="7da84ded894cbe24feecc8768a6b4dde1081df6c215a298f12a6bcc8451f487e" exitCode=0 Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.124849 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-km7zx" event={"ID":"0caade90-0fc6-4fa9-9c58-8251b88cb827","Type":"ContainerDied","Data":"7da84ded894cbe24feecc8768a6b4dde1081df6c215a298f12a6bcc8451f487e"} Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.133246 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"078b2cf8-a9b3-422f-b0e9-2d60586d9062","Type":"ContainerStarted","Data":"35e579cb56f91cd1eff276f8271f0f95860e777fbd8d75f885d1097a5f71078b"} Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.133380 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="078b2cf8-a9b3-422f-b0e9-2d60586d9062" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://35e579cb56f91cd1eff276f8271f0f95860e777fbd8d75f885d1097a5f71078b" gracePeriod=30 Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.144433 4828 generic.go:334] "Generic (PLEG): container finished" podID="9c825b05-679b-4869-846d-ba11b6cdda19" containerID="3a223c5d417b42dc622810b8d6b0f0ec6b1af5d0265fc9aa843fa9c5c8e2c8d4" exitCode=0 Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.144509 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" event={"ID":"9c825b05-679b-4869-846d-ba11b6cdda19","Type":"ContainerDied","Data":"3a223c5d417b42dc622810b8d6b0f0ec6b1af5d0265fc9aa843fa9c5c8e2c8d4"} Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.147316 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47hsp" event={"ID":"81989aca-451d-4a70-b683-52f8675c3f12","Type":"ContainerStarted","Data":"4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8"} Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.150762 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9","Type":"ContainerStarted","Data":"2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988"} Dec 05 19:27:03 crc 
kubenswrapper[4828]: I1205 19:27:03.150787 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9","Type":"ContainerStarted","Data":"7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d"} Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.150877 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" containerName="nova-metadata-log" containerID="cri-o://7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d" gracePeriod=30 Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.151346 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" containerName="nova-metadata-metadata" containerID="cri-o://2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988" gracePeriod=30 Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.153953 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.054499561 podStartE2EDuration="11.153883952s" podCreationTimestamp="2025-12-05 19:26:52 +0000 UTC" firstStartedPulling="2025-12-05 19:26:53.654934184 +0000 UTC m=+1391.550156490" lastFinishedPulling="2025-12-05 19:27:01.754318575 +0000 UTC m=+1399.649540881" observedRunningTime="2025-12-05 19:27:03.137617923 +0000 UTC m=+1401.032840229" watchObservedRunningTime="2025-12-05 19:27:03.153883952 +0000 UTC m=+1401.049106268" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.169728 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.469941156 podStartE2EDuration="12.1697063s" podCreationTimestamp="2025-12-05 19:26:51 +0000 UTC" firstStartedPulling="2025-12-05 19:26:53.049433692 +0000 UTC m=+1390.944655998" lastFinishedPulling="2025-12-05 19:27:01.749198836 +0000 UTC m=+1399.644421142" observedRunningTime="2025-12-05 19:27:03.157529201 +0000 UTC m=+1401.052751507" watchObservedRunningTime="2025-12-05 19:27:03.1697063 +0000 UTC m=+1401.064928606" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.178808 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.167950067 podStartE2EDuration="11.178776515s" podCreationTimestamp="2025-12-05 19:26:52 +0000 UTC" firstStartedPulling="2025-12-05 19:26:53.743393124 +0000 UTC m=+1391.638615430" lastFinishedPulling="2025-12-05 19:27:01.754219572 +0000 UTC m=+1399.649441878" observedRunningTime="2025-12-05 19:27:03.174663764 +0000 UTC m=+1401.069886070" watchObservedRunningTime="2025-12-05 19:27:03.178776515 +0000 UTC m=+1401.073998821" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.218329 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.698939213 podStartE2EDuration="12.218308373s" podCreationTimestamp="2025-12-05 19:26:51 +0000 UTC" firstStartedPulling="2025-12-05 19:26:53.23550312 +0000 UTC m=+1391.130725426" lastFinishedPulling="2025-12-05 19:27:01.75487228 +0000 UTC m=+1399.650094586" observedRunningTime="2025-12-05 19:27:03.214252314 +0000 UTC m=+1401.109474620" watchObservedRunningTime="2025-12-05 19:27:03.218308373 +0000 UTC m=+1401.113530679" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.257004 4828 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-operators-47hsp" podStartSLOduration=4.436221018 podStartE2EDuration="12.256981079s" podCreationTimestamp="2025-12-05 19:26:51 +0000 UTC" firstStartedPulling="2025-12-05 19:26:53.93417926 +0000 UTC m=+1391.829401566" lastFinishedPulling="2025-12-05 19:27:01.754939321 +0000 UTC m=+1399.650161627" observedRunningTime="2025-12-05 19:27:03.232071445 +0000 UTC m=+1401.127293751" watchObservedRunningTime="2025-12-05 19:27:03.256981079 +0000 UTC m=+1401.152203385" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.446371 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.539332 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-dns-svc\") pod \"9c825b05-679b-4869-846d-ba11b6cdda19\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.539400 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-dns-swift-storage-0\") pod \"9c825b05-679b-4869-846d-ba11b6cdda19\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.539435 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-config\") pod \"9c825b05-679b-4869-846d-ba11b6cdda19\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.539455 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdq7j\" (UniqueName: \"kubernetes.io/projected/9c825b05-679b-4869-846d-ba11b6cdda19-kube-api-access-zdq7j\") pod \"9c825b05-679b-4869-846d-ba11b6cdda19\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.539527 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-ovsdbserver-sb\") pod \"9c825b05-679b-4869-846d-ba11b6cdda19\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.539571 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-ovsdbserver-nb\") pod \"9c825b05-679b-4869-846d-ba11b6cdda19\" (UID: \"9c825b05-679b-4869-846d-ba11b6cdda19\") " Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.545107 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c825b05-679b-4869-846d-ba11b6cdda19-kube-api-access-zdq7j" (OuterVolumeSpecName: "kube-api-access-zdq7j") pod "9c825b05-679b-4869-846d-ba11b6cdda19" (UID: "9c825b05-679b-4869-846d-ba11b6cdda19"). InnerVolumeSpecName "kube-api-access-zdq7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.589922 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9c825b05-679b-4869-846d-ba11b6cdda19" (UID: "9c825b05-679b-4869-846d-ba11b6cdda19"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.593379 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-config" (OuterVolumeSpecName: "config") pod "9c825b05-679b-4869-846d-ba11b6cdda19" (UID: "9c825b05-679b-4869-846d-ba11b6cdda19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.601401 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9c825b05-679b-4869-846d-ba11b6cdda19" (UID: "9c825b05-679b-4869-846d-ba11b6cdda19"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.609861 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9c825b05-679b-4869-846d-ba11b6cdda19" (UID: "9c825b05-679b-4869-846d-ba11b6cdda19"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.622306 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9c825b05-679b-4869-846d-ba11b6cdda19" (UID: "9c825b05-679b-4869-846d-ba11b6cdda19"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.642463 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.642494 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.642504 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.642517 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.642526 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c825b05-679b-4869-846d-ba11b6cdda19-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:03 crc kubenswrapper[4828]: I1205 19:27:03.642537 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdq7j\" (UniqueName: \"kubernetes.io/projected/9c825b05-679b-4869-846d-ba11b6cdda19-kube-api-access-zdq7j\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.045791 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.150482 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-logs\") pod \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.150581 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-combined-ca-bundle\") pod \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.150762 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdzln\" (UniqueName: \"kubernetes.io/projected/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-kube-api-access-wdzln\") pod \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.150973 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-logs" (OuterVolumeSpecName: "logs") pod "bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" (UID: "bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.151466 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-config-data\") pod \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\" (UID: \"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9\") " Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.152037 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.161339 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" event={"ID":"9c825b05-679b-4869-846d-ba11b6cdda19","Type":"ContainerDied","Data":"44484adf8e9af3fdc4e890e5599750f59f5c11949c3938c1d2e5c63805eab150"} Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.161420 4828 scope.go:117] "RemoveContainer" containerID="3a223c5d417b42dc622810b8d6b0f0ec6b1af5d0265fc9aa843fa9c5c8e2c8d4" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.161362 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-krcgx" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.164503 4828 generic.go:334] "Generic (PLEG): container finished" podID="bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" containerID="2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988" exitCode=0 Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.164531 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.164538 4828 generic.go:334] "Generic (PLEG): container finished" podID="bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" containerID="7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d" exitCode=143 Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.164534 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9","Type":"ContainerDied","Data":"2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988"} Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.164612 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9","Type":"ContainerDied","Data":"7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d"} Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.164649 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9","Type":"ContainerDied","Data":"43bd7a0358e53baa05a96eafa8a548a6a8e2c34db607d9cefbbe2d43d465b197"} Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.167963 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-kube-api-access-wdzln" (OuterVolumeSpecName: "kube-api-access-wdzln") pod "bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" (UID: "bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9"). InnerVolumeSpecName "kube-api-access-wdzln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.179022 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" (UID: "bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.189280 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-config-data" (OuterVolumeSpecName: "config-data") pod "bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" (UID: "bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.264306 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.264331 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.264341 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdzln\" (UniqueName: \"kubernetes.io/projected/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9-kube-api-access-wdzln\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.266119 4828 scope.go:117] "RemoveContainer" containerID="66e3306a5c0fa739135c1249813ea7469570227f9e3befa3b444f3b88ea6cadb" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.291427 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-krcgx"] Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.297262 4828 scope.go:117] "RemoveContainer" containerID="2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.299591 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-krcgx"] Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.320722 4828 scope.go:117] "RemoveContainer" containerID="7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.347009 4828 scope.go:117] "RemoveContainer" containerID="2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988" Dec 05 19:27:04 crc kubenswrapper[4828]: E1205 19:27:04.347451 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988\": container with ID starting with 2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988 not found: ID does not exist" containerID="2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.347481 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988"} err="failed to get container status 
\"2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988\": rpc error: code = NotFound desc = could not find container \"2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988\": container with ID starting with 2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988 not found: ID does not exist" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.347509 4828 scope.go:117] "RemoveContainer" containerID="7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d" Dec 05 19:27:04 crc kubenswrapper[4828]: E1205 19:27:04.347709 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d\": container with ID starting with 7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d not found: ID does not exist" containerID="7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.347725 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d"} err="failed to get container status \"7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d\": rpc error: code = NotFound desc = could not find container \"7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d\": container with ID starting with 7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d not found: ID does not exist" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.347738 4828 scope.go:117] "RemoveContainer" containerID="2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.347953 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988"} err="failed to get container status \"2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988\": rpc error: code = NotFound desc = could not find container \"2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988\": container with ID starting with 2a127a574579da3b70ffb32d0d8df728ccbb4310c7f7efd3f80e9bef756b4988 not found: ID does not exist" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.347965 4828 scope.go:117] "RemoveContainer" containerID="7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.348131 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d"} err="failed to get container status \"7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d\": rpc error: code = NotFound desc = could not find container \"7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d\": container with ID starting with 7191d2faa8842741d19dc12fa3b28f40eb828908f11ede21e64d6d81e79bd74d not found: ID does not exist" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.444505 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-km7zx" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.462518 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c825b05-679b-4869-846d-ba11b6cdda19" path="/var/lib/kubelet/pods/9c825b05-679b-4869-846d-ba11b6cdda19/volumes" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.515110 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.540927 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.552163 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:27:04 crc kubenswrapper[4828]: E1205 19:27:04.552635 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" containerName="nova-metadata-metadata" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.552657 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" containerName="nova-metadata-metadata" Dec 05 19:27:04 crc kubenswrapper[4828]: E1205 19:27:04.552701 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0caade90-0fc6-4fa9-9c58-8251b88cb827" containerName="nova-manage" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.552711 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="0caade90-0fc6-4fa9-9c58-8251b88cb827" containerName="nova-manage" Dec 05 19:27:04 crc kubenswrapper[4828]: E1205 19:27:04.552730 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c825b05-679b-4869-846d-ba11b6cdda19" containerName="dnsmasq-dns" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.552738 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c825b05-679b-4869-846d-ba11b6cdda19" containerName="dnsmasq-dns" Dec 05 19:27:04 crc kubenswrapper[4828]: E1205 19:27:04.552753 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" containerName="nova-metadata-log" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.552761 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" containerName="nova-metadata-log" Dec 05 19:27:04 crc kubenswrapper[4828]: E1205 19:27:04.552778 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c825b05-679b-4869-846d-ba11b6cdda19" containerName="init" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.552785 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c825b05-679b-4869-846d-ba11b6cdda19" containerName="init" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.553055 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="0caade90-0fc6-4fa9-9c58-8251b88cb827" containerName="nova-manage" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.553078 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" containerName="nova-metadata-log" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.553093 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" containerName="nova-metadata-metadata" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.553105 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c825b05-679b-4869-846d-ba11b6cdda19" containerName="dnsmasq-dns" Dec 
05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.554342 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.560378 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.560614 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.565661 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.571519 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-combined-ca-bundle\") pod \"0caade90-0fc6-4fa9-9c58-8251b88cb827\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.571618 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-scripts\") pod \"0caade90-0fc6-4fa9-9c58-8251b88cb827\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.571741 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-config-data\") pod \"0caade90-0fc6-4fa9-9c58-8251b88cb827\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.571900 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqnft\" (UniqueName: \"kubernetes.io/projected/0caade90-0fc6-4fa9-9c58-8251b88cb827-kube-api-access-zqnft\") pod \"0caade90-0fc6-4fa9-9c58-8251b88cb827\" (UID: \"0caade90-0fc6-4fa9-9c58-8251b88cb827\") " Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.578790 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0caade90-0fc6-4fa9-9c58-8251b88cb827-kube-api-access-zqnft" (OuterVolumeSpecName: "kube-api-access-zqnft") pod "0caade90-0fc6-4fa9-9c58-8251b88cb827" (UID: "0caade90-0fc6-4fa9-9c58-8251b88cb827"). InnerVolumeSpecName "kube-api-access-zqnft". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.578984 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-scripts" (OuterVolumeSpecName: "scripts") pod "0caade90-0fc6-4fa9-9c58-8251b88cb827" (UID: "0caade90-0fc6-4fa9-9c58-8251b88cb827"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.603688 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-config-data" (OuterVolumeSpecName: "config-data") pod "0caade90-0fc6-4fa9-9c58-8251b88cb827" (UID: "0caade90-0fc6-4fa9-9c58-8251b88cb827"). InnerVolumeSpecName "config-data". 
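[Editor's note] The cpu_manager/memory_manager "RemoveStaleState" entries above fire while the replacement nova-metadata-0 is admitted: before assigning resources to the new pod, the managers drop any CPU/memory bookkeeping still recorded for containers of the just-deleted pods. A schematic sketch of that sweep; the types and the cpuset values are illustrative, not the kubelet's own:

```go
package main

import "fmt"

// key identifies an assignment the way the log lines do: pod UID plus
// container name.
type key struct{ podUID, container string }

// removeStaleState drops assignments whose containers are no longer in the
// active set, mirroring the RemoveStaleState messages above.
func removeStaleState(assignments map[key]string, active map[key]bool) {
	for k := range assignments {
		if !active[k] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9", "nova-metadata-metadata"}: "cpuset 0-1",
		{"9c825b05-679b-4869-846d-ba11b6cdda19", "dnsmasq-dns"}:            "cpuset 2",
	}
	removeStaleState(assignments, map[key]bool{}) // nothing active: both entries go
}
```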
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.606525 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0caade90-0fc6-4fa9-9c58-8251b88cb827" (UID: "0caade90-0fc6-4fa9-9c58-8251b88cb827"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.674050 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.674109 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfbebd92-0203-4e5c-9486-e749804c9a89-logs\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.674147 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-config-data\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.674173 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.674230 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7ch\" (UniqueName: \"kubernetes.io/projected/bfbebd92-0203-4e5c-9486-e749804c9a89-kube-api-access-zz7ch\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.674371 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.674388 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqnft\" (UniqueName: \"kubernetes.io/projected/0caade90-0fc6-4fa9-9c58-8251b88cb827-kube-api-access-zqnft\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.674402 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.674415 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0caade90-0fc6-4fa9-9c58-8251b88cb827-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.775858 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.776160 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfbebd92-0203-4e5c-9486-e749804c9a89-logs\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.776190 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-config-data\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.776232 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.776282 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz7ch\" (UniqueName: \"kubernetes.io/projected/bfbebd92-0203-4e5c-9486-e749804c9a89-kube-api-access-zz7ch\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.776726 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfbebd92-0203-4e5c-9486-e749804c9a89-logs\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.780283 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.780324 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-config-data\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.780841 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.792115 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz7ch\" (UniqueName: \"kubernetes.io/projected/bfbebd92-0203-4e5c-9486-e749804c9a89-kube-api-access-zz7ch\") pod \"nova-metadata-0\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " pod="openstack/nova-metadata-0" Dec 05 19:27:04 crc 
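[Editor's note] For the replacement nova-metadata-0 pod the volume reconciler logs, per volume and in order, "VerifyControllerAttachedVolume started", then "MountVolume started", then "MountVolume.SetUp succeeded". A schematic of that phased progression, one step per reconcile pass; the state names and driver loop are illustrative, not the kubelet's volumemanager implementation:

```go
package main

import "fmt"

type volState int

const (
	unattached volState = iota // must confirm the attach first
	attached                   // attach confirmed; safe to mount
	mounted                    // SetUp done; pod can start
)

// reconcile advances each volume one phase per pass, matching the order the
// log lines show: verify attach first, then mount/SetUp.
func reconcile(vols map[string]volState) (progress bool) {
	for name, st := range vols {
		switch st {
		case unattached:
			fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", name)
			vols[name] = attached
			progress = true
		case attached:
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
			vols[name] = mounted
			progress = true
		}
	}
	return progress
}

func main() {
	vols := map[string]volState{"logs": unattached, "config-data": unattached}
	for reconcile(vols) {
	} // loop until every volume reaches mounted
}
```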
Dec 05 19:27:04 crc kubenswrapper[4828]: I1205 19:27:04.873303 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 05 19:27:05 crc kubenswrapper[4828]: I1205 19:27:05.176572 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-km7zx" event={"ID":"0caade90-0fc6-4fa9-9c58-8251b88cb827","Type":"ContainerDied","Data":"3758c5768ef6fbb6589eb8d6ab5ea7e789971620dd7ae9e536abff5df4572c88"}
Dec 05 19:27:05 crc kubenswrapper[4828]: I1205 19:27:05.176610 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3758c5768ef6fbb6589eb8d6ab5ea7e789971620dd7ae9e536abff5df4572c88"
Dec 05 19:27:05 crc kubenswrapper[4828]: I1205 19:27:05.176668 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-km7zx"
Dec 05 19:27:05 crc kubenswrapper[4828]: I1205 19:27:05.560703 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 19:27:05 crc kubenswrapper[4828]: W1205 19:27:05.562008 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfbebd92_0203_4e5c_9486_e749804c9a89.slice/crio-4244bbf960742265effca4af02f04c38d142ed75aecf77e65f45c50316bb8f06 WatchSource:0}: Error finding container 4244bbf960742265effca4af02f04c38d142ed75aecf77e65f45c50316bb8f06: Status 404 returned error can't find the container with id 4244bbf960742265effca4af02f04c38d142ed75aecf77e65f45c50316bb8f06
Dec 05 19:27:05 crc kubenswrapper[4828]: I1205 19:27:05.642327 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 05 19:27:05 crc kubenswrapper[4828]: I1205 19:27:05.642634 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e187cd7d-903e-4396-b2b5-f0b87c944956" containerName="nova-api-log" containerID="cri-o://7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce" gracePeriod=30
Dec 05 19:27:05 crc kubenswrapper[4828]: I1205 19:27:05.642720 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e187cd7d-903e-4396-b2b5-f0b87c944956" containerName="nova-api-api" containerID="cri-o://159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479" gracePeriod=30
Dec 05 19:27:05 crc kubenswrapper[4828]: I1205 19:27:05.652975 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 05 19:27:05 crc kubenswrapper[4828]: I1205 19:27:05.653200 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="429a780e-1367-4ebd-bfef-dddb23dfbcb0" containerName="nova-scheduler-scheduler" containerID="cri-o://9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c" gracePeriod=30
Dec 05 19:27:05 crc kubenswrapper[4828]: I1205 19:27:05.707788 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.188345 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.190603 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfbebd92-0203-4e5c-9486-e749804c9a89","Type":"ContainerStarted","Data":"d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b"}
Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.190638 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfbebd92-0203-4e5c-9486-e749804c9a89","Type":"ContainerStarted","Data":"4244bbf960742265effca4af02f04c38d142ed75aecf77e65f45c50316bb8f06"}
Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.193060 4828 generic.go:334] "Generic (PLEG): container finished" podID="e187cd7d-903e-4396-b2b5-f0b87c944956" containerID="159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479" exitCode=0
Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.193092 4828 generic.go:334] "Generic (PLEG): container finished" podID="e187cd7d-903e-4396-b2b5-f0b87c944956" containerID="7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce" exitCode=143
Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.193111 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e187cd7d-903e-4396-b2b5-f0b87c944956","Type":"ContainerDied","Data":"159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479"}
Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.193132 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e187cd7d-903e-4396-b2b5-f0b87c944956","Type":"ContainerDied","Data":"7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce"}
Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.193143 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e187cd7d-903e-4396-b2b5-f0b87c944956","Type":"ContainerDied","Data":"f7f90f23b99c7fe5f88787ee603f0f48b770e0682b6821b3e16834fdf1ec1179"}
Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.193162 4828 scope.go:117] "RemoveContainer" containerID="159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479"
Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.193278 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.239581 4828 scope.go:117] "RemoveContainer" containerID="7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.284976 4828 scope.go:117] "RemoveContainer" containerID="159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479" Dec 05 19:27:06 crc kubenswrapper[4828]: E1205 19:27:06.285421 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479\": container with ID starting with 159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479 not found: ID does not exist" containerID="159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.285473 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479"} err="failed to get container status \"159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479\": rpc error: code = NotFound desc = could not find container \"159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479\": container with ID starting with 159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479 not found: ID does not exist" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.285506 4828 scope.go:117] "RemoveContainer" containerID="7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce" Dec 05 19:27:06 crc kubenswrapper[4828]: E1205 19:27:06.285953 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce\": container with ID starting with 7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce not found: ID does not exist" containerID="7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.285998 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce"} err="failed to get container status \"7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce\": rpc error: code = NotFound desc = could not find container \"7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce\": container with ID starting with 7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce not found: ID does not exist" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.286026 4828 scope.go:117] "RemoveContainer" containerID="159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.286242 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479"} err="failed to get container status \"159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479\": rpc error: code = NotFound desc = could not find container \"159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479\": container with ID starting with 159ecf844d67385b973ad8d60b1a79b8154e87f0117ba92d2c52a5ddb2449479 not found: ID does not exist" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.286271 4828 
scope.go:117] "RemoveContainer" containerID="7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.286470 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce"} err="failed to get container status \"7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce\": rpc error: code = NotFound desc = could not find container \"7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce\": container with ID starting with 7b4799931ff74c993d60f731a84bcddd94085f0c50fc8054a1e280137b9f2cce not found: ID does not exist" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.301549 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5bnl\" (UniqueName: \"kubernetes.io/projected/e187cd7d-903e-4396-b2b5-f0b87c944956-kube-api-access-k5bnl\") pod \"e187cd7d-903e-4396-b2b5-f0b87c944956\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.301736 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e187cd7d-903e-4396-b2b5-f0b87c944956-logs\") pod \"e187cd7d-903e-4396-b2b5-f0b87c944956\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.302260 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e187cd7d-903e-4396-b2b5-f0b87c944956-logs" (OuterVolumeSpecName: "logs") pod "e187cd7d-903e-4396-b2b5-f0b87c944956" (UID: "e187cd7d-903e-4396-b2b5-f0b87c944956"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.302361 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e187cd7d-903e-4396-b2b5-f0b87c944956-config-data\") pod \"e187cd7d-903e-4396-b2b5-f0b87c944956\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.302434 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e187cd7d-903e-4396-b2b5-f0b87c944956-combined-ca-bundle\") pod \"e187cd7d-903e-4396-b2b5-f0b87c944956\" (UID: \"e187cd7d-903e-4396-b2b5-f0b87c944956\") " Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.302975 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e187cd7d-903e-4396-b2b5-f0b87c944956-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.306683 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e187cd7d-903e-4396-b2b5-f0b87c944956-kube-api-access-k5bnl" (OuterVolumeSpecName: "kube-api-access-k5bnl") pod "e187cd7d-903e-4396-b2b5-f0b87c944956" (UID: "e187cd7d-903e-4396-b2b5-f0b87c944956"). InnerVolumeSpecName "kube-api-access-k5bnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.336927 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e187cd7d-903e-4396-b2b5-f0b87c944956-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e187cd7d-903e-4396-b2b5-f0b87c944956" (UID: "e187cd7d-903e-4396-b2b5-f0b87c944956"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.343793 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e187cd7d-903e-4396-b2b5-f0b87c944956-config-data" (OuterVolumeSpecName: "config-data") pod "e187cd7d-903e-4396-b2b5-f0b87c944956" (UID: "e187cd7d-903e-4396-b2b5-f0b87c944956"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.404973 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e187cd7d-903e-4396-b2b5-f0b87c944956-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.405006 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e187cd7d-903e-4396-b2b5-f0b87c944956-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.405254 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5bnl\" (UniqueName: \"kubernetes.io/projected/e187cd7d-903e-4396-b2b5-f0b87c944956-kube-api-access-k5bnl\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.472504 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9" path="/var/lib/kubelet/pods/bbaa88d0-9e3f-40ab-8f04-7abcbc3ed7d9/volumes" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.533270 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.551943 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.582004 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:06 crc kubenswrapper[4828]: E1205 19:27:06.582361 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e187cd7d-903e-4396-b2b5-f0b87c944956" containerName="nova-api-log" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.582380 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e187cd7d-903e-4396-b2b5-f0b87c944956" containerName="nova-api-log" Dec 05 19:27:06 crc kubenswrapper[4828]: E1205 19:27:06.582410 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e187cd7d-903e-4396-b2b5-f0b87c944956" containerName="nova-api-api" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.582416 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e187cd7d-903e-4396-b2b5-f0b87c944956" containerName="nova-api-api" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.582589 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="e187cd7d-903e-4396-b2b5-f0b87c944956" containerName="nova-api-log" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.582616 4828 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e187cd7d-903e-4396-b2b5-f0b87c944956" containerName="nova-api-api" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.583750 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.589584 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.595982 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.713796 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbwrr\" (UniqueName: \"kubernetes.io/projected/4e71cb60-681c-465a-b651-1a5cd566baa9-kube-api-access-cbwrr\") pod \"nova-api-0\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.714088 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e71cb60-681c-465a-b651-1a5cd566baa9-config-data\") pod \"nova-api-0\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.714268 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e71cb60-681c-465a-b651-1a5cd566baa9-logs\") pod \"nova-api-0\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.714359 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e71cb60-681c-465a-b651-1a5cd566baa9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.817056 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbwrr\" (UniqueName: \"kubernetes.io/projected/4e71cb60-681c-465a-b651-1a5cd566baa9-kube-api-access-cbwrr\") pod \"nova-api-0\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.817185 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e71cb60-681c-465a-b651-1a5cd566baa9-config-data\") pod \"nova-api-0\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.817260 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e71cb60-681c-465a-b651-1a5cd566baa9-logs\") pod \"nova-api-0\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.817306 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e71cb60-681c-465a-b651-1a5cd566baa9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.817859 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e71cb60-681c-465a-b651-1a5cd566baa9-logs\") pod \"nova-api-0\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.823762 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e71cb60-681c-465a-b651-1a5cd566baa9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.830611 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e71cb60-681c-465a-b651-1a5cd566baa9-config-data\") pod \"nova-api-0\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.844661 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbwrr\" (UniqueName: \"kubernetes.io/projected/4e71cb60-681c-465a-b651-1a5cd566baa9-kube-api-access-cbwrr\") pod \"nova-api-0\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " pod="openstack/nova-api-0" Dec 05 19:27:06 crc kubenswrapper[4828]: I1205 19:27:06.907790 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.027771 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.122363 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429a780e-1367-4ebd-bfef-dddb23dfbcb0-combined-ca-bundle\") pod \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\" (UID: \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\") " Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.122545 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnp8m\" (UniqueName: \"kubernetes.io/projected/429a780e-1367-4ebd-bfef-dddb23dfbcb0-kube-api-access-pnp8m\") pod \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\" (UID: \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\") " Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.122614 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429a780e-1367-4ebd-bfef-dddb23dfbcb0-config-data\") pod \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\" (UID: \"429a780e-1367-4ebd-bfef-dddb23dfbcb0\") " Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.128242 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/429a780e-1367-4ebd-bfef-dddb23dfbcb0-kube-api-access-pnp8m" (OuterVolumeSpecName: "kube-api-access-pnp8m") pod "429a780e-1367-4ebd-bfef-dddb23dfbcb0" (UID: "429a780e-1367-4ebd-bfef-dddb23dfbcb0"). InnerVolumeSpecName "kube-api-access-pnp8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.152941 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429a780e-1367-4ebd-bfef-dddb23dfbcb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "429a780e-1367-4ebd-bfef-dddb23dfbcb0" (UID: "429a780e-1367-4ebd-bfef-dddb23dfbcb0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.156211 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429a780e-1367-4ebd-bfef-dddb23dfbcb0-config-data" (OuterVolumeSpecName: "config-data") pod "429a780e-1367-4ebd-bfef-dddb23dfbcb0" (UID: "429a780e-1367-4ebd-bfef-dddb23dfbcb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.204721 4828 generic.go:334] "Generic (PLEG): container finished" podID="429a780e-1367-4ebd-bfef-dddb23dfbcb0" containerID="9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c" exitCode=0 Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.204790 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.204896 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"429a780e-1367-4ebd-bfef-dddb23dfbcb0","Type":"ContainerDied","Data":"9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c"} Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.204962 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"429a780e-1367-4ebd-bfef-dddb23dfbcb0","Type":"ContainerDied","Data":"11ba76cdd5191007dfaf07d033386d09fdb0c1303c6fd0f42a13296488dcf059"} Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.205009 4828 scope.go:117] "RemoveContainer" containerID="9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.207402 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfbebd92-0203-4e5c-9486-e749804c9a89","Type":"ContainerStarted","Data":"413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49"} Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.207542 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bfbebd92-0203-4e5c-9486-e749804c9a89" containerName="nova-metadata-log" containerID="cri-o://d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b" gracePeriod=30 Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.207856 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bfbebd92-0203-4e5c-9486-e749804c9a89" containerName="nova-metadata-metadata" containerID="cri-o://413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49" gracePeriod=30 Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.212073 4828 generic.go:334] "Generic (PLEG): container finished" podID="f832ed19-2e34-439e-bb52-37b2919b810e" containerID="86f70020404684ce0a0e17ce7204128d13b3cb7c22ace5acbdf319b82d377102" exitCode=0 Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.212145 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jzjgf" event={"ID":"f832ed19-2e34-439e-bb52-37b2919b810e","Type":"ContainerDied","Data":"86f70020404684ce0a0e17ce7204128d13b3cb7c22ace5acbdf319b82d377102"} Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.225471 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429a780e-1367-4ebd-bfef-dddb23dfbcb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 
Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.225688 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnp8m\" (UniqueName: \"kubernetes.io/projected/429a780e-1367-4ebd-bfef-dddb23dfbcb0-kube-api-access-pnp8m\") on node \"crc\" DevicePath \"\""
Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.225702 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429a780e-1367-4ebd-bfef-dddb23dfbcb0-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.241069 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.241047137 podStartE2EDuration="3.241047137s" podCreationTimestamp="2025-12-05 19:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:27:07.237005337 +0000 UTC m=+1405.132227653" watchObservedRunningTime="2025-12-05 19:27:07.241047137 +0000 UTC m=+1405.136269443"
Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.241648 4828 scope.go:117] "RemoveContainer" containerID="9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c"
Dec 05 19:27:07 crc kubenswrapper[4828]: E1205 19:27:07.242221 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c\": container with ID starting with 9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c not found: ID does not exist" containerID="9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c"
Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.242262 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c"} err="failed to get container status \"9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c\": rpc error: code = NotFound desc = could not find container \"9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c\": container with ID starting with 9de7f94e6abf7e388ffc0f6bc986d6b034f4bf2ed0bbd611a1cf18d92ee7341c not found: ID does not exist"
Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.265717 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.281298 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.290785 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 05 19:27:07 crc kubenswrapper[4828]: E1205 19:27:07.291407 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429a780e-1367-4ebd-bfef-dddb23dfbcb0" containerName="nova-scheduler-scheduler"
Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.291498 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="429a780e-1367-4ebd-bfef-dddb23dfbcb0" containerName="nova-scheduler-scheduler"
Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.291772 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="429a780e-1367-4ebd-bfef-dddb23dfbcb0" containerName="nova-scheduler-scheduler"
Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.292560 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
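[Editor's note] The pod_startup_latency_tracker line above reports podStartSLOduration=3.241047137 for the replacement nova-metadata-0: watchObservedRunningTime (19:27:07.241047137) minus podCreationTimestamp (19:27:04), with the zero-valued pull timestamps indicating no image pull was needed. The same arithmetic in Go, parsing the wall-clock portion of the printed timestamps (the trailing m=+... monotonic-clock suffix must be stripped before parsing):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching timestamps as printed above, e.g. "2025-12-05 19:27:04 +0000 UTC";
	// fractional seconds are optional thanks to the .999999999 placeholder.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-12-05 19:27:04 +0000 UTC")
	if err != nil {
		panic(err)
	}
	watched, err := time.Parse(layout, "2025-12-05 19:27:07.241047137 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(watched.Sub(created)) // 3.241047137s, matching podStartSLOduration
}
```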
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.305654 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.327883 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6951b78-5bbf-48d5-9edc-efbde0f5e939-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.327951 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlm8p\" (UniqueName: \"kubernetes.io/projected/f6951b78-5bbf-48d5-9edc-efbde0f5e939-kube-api-access-dlm8p\") pod \"nova-scheduler-0\" (UID: \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.327992 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6951b78-5bbf-48d5-9edc-efbde0f5e939-config-data\") pod \"nova-scheduler-0\" (UID: \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.336775 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.389561 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:07 crc kubenswrapper[4828]: W1205 19:27:07.429334 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e71cb60_681c_465a_b651_1a5cd566baa9.slice/crio-28b8eca685e955b1a906c56b0f36bbfc061465cf816e5432d3d2e3fdf92ae537 WatchSource:0}: Error finding container 28b8eca685e955b1a906c56b0f36bbfc061465cf816e5432d3d2e3fdf92ae537: Status 404 returned error can't find the container with id 28b8eca685e955b1a906c56b0f36bbfc061465cf816e5432d3d2e3fdf92ae537 Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.429878 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6951b78-5bbf-48d5-9edc-efbde0f5e939-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.429943 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlm8p\" (UniqueName: \"kubernetes.io/projected/f6951b78-5bbf-48d5-9edc-efbde0f5e939-kube-api-access-dlm8p\") pod \"nova-scheduler-0\" (UID: \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.430010 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6951b78-5bbf-48d5-9edc-efbde0f5e939-config-data\") pod \"nova-scheduler-0\" (UID: \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.434663 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f6951b78-5bbf-48d5-9edc-efbde0f5e939-config-data\") pod \"nova-scheduler-0\" (UID: \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.434918 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6951b78-5bbf-48d5-9edc-efbde0f5e939-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.447362 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlm8p\" (UniqueName: \"kubernetes.io/projected/f6951b78-5bbf-48d5-9edc-efbde0f5e939-kube-api-access-dlm8p\") pod \"nova-scheduler-0\" (UID: \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.698278 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.733871 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.742081 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz7ch\" (UniqueName: \"kubernetes.io/projected/bfbebd92-0203-4e5c-9486-e749804c9a89-kube-api-access-zz7ch\") pod \"bfbebd92-0203-4e5c-9486-e749804c9a89\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.746511 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfbebd92-0203-4e5c-9486-e749804c9a89-logs\") pod \"bfbebd92-0203-4e5c-9486-e749804c9a89\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.746635 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-combined-ca-bundle\") pod \"bfbebd92-0203-4e5c-9486-e749804c9a89\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.746718 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-config-data\") pod \"bfbebd92-0203-4e5c-9486-e749804c9a89\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.746738 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-nova-metadata-tls-certs\") pod \"bfbebd92-0203-4e5c-9486-e749804c9a89\" (UID: \"bfbebd92-0203-4e5c-9486-e749804c9a89\") " Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.748122 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfbebd92-0203-4e5c-9486-e749804c9a89-logs" (OuterVolumeSpecName: "logs") pod "bfbebd92-0203-4e5c-9486-e749804c9a89" (UID: "bfbebd92-0203-4e5c-9486-e749804c9a89"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.752462 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfbebd92-0203-4e5c-9486-e749804c9a89-kube-api-access-zz7ch" (OuterVolumeSpecName: "kube-api-access-zz7ch") pod "bfbebd92-0203-4e5c-9486-e749804c9a89" (UID: "bfbebd92-0203-4e5c-9486-e749804c9a89"). InnerVolumeSpecName "kube-api-access-zz7ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.798002 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-config-data" (OuterVolumeSpecName: "config-data") pod "bfbebd92-0203-4e5c-9486-e749804c9a89" (UID: "bfbebd92-0203-4e5c-9486-e749804c9a89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.807889 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfbebd92-0203-4e5c-9486-e749804c9a89" (UID: "bfbebd92-0203-4e5c-9486-e749804c9a89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.817332 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.833918 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bfbebd92-0203-4e5c-9486-e749804c9a89" (UID: "bfbebd92-0203-4e5c-9486-e749804c9a89"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.850133 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfbebd92-0203-4e5c-9486-e749804c9a89-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.850167 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.850182 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.850194 4828 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbebd92-0203-4e5c-9486-e749804c9a89-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:07 crc kubenswrapper[4828]: I1205 19:27:07.850206 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz7ch\" (UniqueName: \"kubernetes.io/projected/bfbebd92-0203-4e5c-9486-e749804c9a89-kube-api-access-zz7ch\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.199434 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 19:27:08 crc kubenswrapper[4828]: W1205 19:27:08.206252 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6951b78_5bbf_48d5_9edc_efbde0f5e939.slice/crio-3b1ba15bb19f489461c6e81f08c9d93ff6ed397abece84ab8882925742b2f739 WatchSource:0}: Error finding container 3b1ba15bb19f489461c6e81f08c9d93ff6ed397abece84ab8882925742b2f739: Status 404 returned error can't find the container with id 3b1ba15bb19f489461c6e81f08c9d93ff6ed397abece84ab8882925742b2f739 Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.226702 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e71cb60-681c-465a-b651-1a5cd566baa9","Type":"ContainerStarted","Data":"c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559"} Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.226751 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e71cb60-681c-465a-b651-1a5cd566baa9","Type":"ContainerStarted","Data":"1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7"} Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.226767 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e71cb60-681c-465a-b651-1a5cd566baa9","Type":"ContainerStarted","Data":"28b8eca685e955b1a906c56b0f36bbfc061465cf816e5432d3d2e3fdf92ae537"} Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.227982 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6951b78-5bbf-48d5-9edc-efbde0f5e939","Type":"ContainerStarted","Data":"3b1ba15bb19f489461c6e81f08c9d93ff6ed397abece84ab8882925742b2f739"} Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.237374 4828 generic.go:334] "Generic (PLEG): container finished" podID="bfbebd92-0203-4e5c-9486-e749804c9a89" containerID="413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49" exitCode=0 Dec 05 
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.237406 4828 generic.go:334] "Generic (PLEG): container finished" podID="bfbebd92-0203-4e5c-9486-e749804c9a89" containerID="d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b" exitCode=143
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.237445 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfbebd92-0203-4e5c-9486-e749804c9a89","Type":"ContainerDied","Data":"413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49"}
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.237497 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfbebd92-0203-4e5c-9486-e749804c9a89","Type":"ContainerDied","Data":"d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b"}
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.237508 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfbebd92-0203-4e5c-9486-e749804c9a89","Type":"ContainerDied","Data":"4244bbf960742265effca4af02f04c38d142ed75aecf77e65f45c50316bb8f06"}
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.237523 4828 scope.go:117] "RemoveContainer" containerID="413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.237647 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.359015 4828 scope.go:117] "RemoveContainer" containerID="d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.360049 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.360028723 podStartE2EDuration="2.360028723s" podCreationTimestamp="2025-12-05 19:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:27:08.248607102 +0000 UTC m=+1406.143829408" watchObservedRunningTime="2025-12-05 19:27:08.360028723 +0000 UTC m=+1406.255251039"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.380496 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.407975 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.422166 4828 scope.go:117] "RemoveContainer" containerID="413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.422619 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 19:27:08 crc kubenswrapper[4828]: E1205 19:27:08.423186 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfbebd92-0203-4e5c-9486-e749804c9a89" containerName="nova-metadata-log"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.423202 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfbebd92-0203-4e5c-9486-e749804c9a89" containerName="nova-metadata-log"
Dec 05 19:27:08 crc kubenswrapper[4828]: E1205 19:27:08.423247 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfbebd92-0203-4e5c-9486-e749804c9a89" containerName="nova-metadata-metadata"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.423254 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfbebd92-0203-4e5c-9486-e749804c9a89" containerName="nova-metadata-metadata"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.423479 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfbebd92-0203-4e5c-9486-e749804c9a89" containerName="nova-metadata-metadata"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.423507 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfbebd92-0203-4e5c-9486-e749804c9a89" containerName="nova-metadata-log"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.425073 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 05 19:27:08 crc kubenswrapper[4828]: E1205 19:27:08.425782 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49\": container with ID starting with 413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49 not found: ID does not exist" containerID="413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.425927 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49"} err="failed to get container status \"413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49\": rpc error: code = NotFound desc = could not find container \"413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49\": container with ID starting with 413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49 not found: ID does not exist"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.425959 4828 scope.go:117] "RemoveContainer" containerID="d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.428159 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.428969 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 05 19:27:08 crc kubenswrapper[4828]: E1205 19:27:08.432006 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b\": container with ID starting with d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b not found: ID does not exist" containerID="d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.432057 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b"} err="failed to get container status \"d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b\": rpc error: code = NotFound desc = could not find container \"d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b\": container with ID starting with d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b not found: ID does not exist"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.432089 4828 scope.go:117] "RemoveContainer" containerID="413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.432852 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49"} err="failed to get container status \"413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49\": rpc error: code = NotFound desc = could not find container \"413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49\": container with ID starting with 413384b48c2ccd2474a234be01f425684fa308d5a86ed5c4b5dbe4e16f55dd49 not found: ID does not exist"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.432895 4828 scope.go:117] "RemoveContainer" containerID="d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.433316 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b"} err="failed to get container status \"d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b\": rpc error: code = NotFound desc = could not find container \"d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b\": container with ID starting with d008269bb48e8f2deaa3b2c89d56d8f4b25457f48ae63f5384a842e9a2c3a61b not found: ID does not exist"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.442102 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.472694 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="429a780e-1367-4ebd-bfef-dddb23dfbcb0" path="/var/lib/kubelet/pods/429a780e-1367-4ebd-bfef-dddb23dfbcb0/volumes"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.473734 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfbebd92-0203-4e5c-9486-e749804c9a89" path="/var/lib/kubelet/pods/bfbebd92-0203-4e5c-9486-e749804c9a89/volumes"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.474556 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e187cd7d-903e-4396-b2b5-f0b87c944956" path="/var/lib/kubelet/pods/e187cd7d-903e-4396-b2b5-f0b87c944956/volumes"
Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.519112 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jzjgf"
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.562012 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcmdc\" (UniqueName: \"kubernetes.io/projected/12374de6-1d67-43ff-8067-319d86b0fe6b-kube-api-access-pcmdc\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.562105 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-config-data\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.562139 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12374de6-1d67-43ff-8067-319d86b0fe6b-logs\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.562164 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.562183 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.663884 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-config-data\") pod \"f832ed19-2e34-439e-bb52-37b2919b810e\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.664069 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-scripts\") pod \"f832ed19-2e34-439e-bb52-37b2919b810e\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.664590 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-combined-ca-bundle\") pod \"f832ed19-2e34-439e-bb52-37b2919b810e\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.665476 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djqjk\" (UniqueName: \"kubernetes.io/projected/f832ed19-2e34-439e-bb52-37b2919b810e-kube-api-access-djqjk\") pod \"f832ed19-2e34-439e-bb52-37b2919b810e\" (UID: \"f832ed19-2e34-439e-bb52-37b2919b810e\") " Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.665833 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pcmdc\" (UniqueName: \"kubernetes.io/projected/12374de6-1d67-43ff-8067-319d86b0fe6b-kube-api-access-pcmdc\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.665952 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-config-data\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.666003 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12374de6-1d67-43ff-8067-319d86b0fe6b-logs\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.666043 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.666068 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.666544 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12374de6-1d67-43ff-8067-319d86b0fe6b-logs\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.669623 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f832ed19-2e34-439e-bb52-37b2919b810e-kube-api-access-djqjk" (OuterVolumeSpecName: "kube-api-access-djqjk") pod "f832ed19-2e34-439e-bb52-37b2919b810e" (UID: "f832ed19-2e34-439e-bb52-37b2919b810e"). InnerVolumeSpecName "kube-api-access-djqjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.671048 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.672139 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-config-data\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.673039 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.677751 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-scripts" (OuterVolumeSpecName: "scripts") pod "f832ed19-2e34-439e-bb52-37b2919b810e" (UID: "f832ed19-2e34-439e-bb52-37b2919b810e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.684264 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcmdc\" (UniqueName: \"kubernetes.io/projected/12374de6-1d67-43ff-8067-319d86b0fe6b-kube-api-access-pcmdc\") pod \"nova-metadata-0\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.703057 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-config-data" (OuterVolumeSpecName: "config-data") pod "f832ed19-2e34-439e-bb52-37b2919b810e" (UID: "f832ed19-2e34-439e-bb52-37b2919b810e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.710719 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f832ed19-2e34-439e-bb52-37b2919b810e" (UID: "f832ed19-2e34-439e-bb52-37b2919b810e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.751478 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.768616 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.768660 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.768676 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djqjk\" (UniqueName: \"kubernetes.io/projected/f832ed19-2e34-439e-bb52-37b2919b810e-kube-api-access-djqjk\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:08 crc kubenswrapper[4828]: I1205 19:27:08.768690 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f832ed19-2e34-439e-bb52-37b2919b810e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.222861 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:27:09 crc kubenswrapper[4828]: W1205 19:27:09.241150 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12374de6_1d67_43ff_8067_319d86b0fe6b.slice/crio-e8d048c21d40895d54a5f1793b5911b74ddd6f985ebb6fa78afaf42e533c2d73 WatchSource:0}: Error finding container e8d048c21d40895d54a5f1793b5911b74ddd6f985ebb6fa78afaf42e533c2d73: Status 404 returned error can't find the container with id e8d048c21d40895d54a5f1793b5911b74ddd6f985ebb6fa78afaf42e533c2d73 Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.266892 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jzjgf" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.266884 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jzjgf" event={"ID":"f832ed19-2e34-439e-bb52-37b2919b810e","Type":"ContainerDied","Data":"eef6bc961d151f3a06bc5ecaf09492d5b998e699103377f54e57f2222220a521"} Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.267006 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef6bc961d151f3a06bc5ecaf09492d5b998e699103377f54e57f2222220a521" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.268291 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12374de6-1d67-43ff-8067-319d86b0fe6b","Type":"ContainerStarted","Data":"e8d048c21d40895d54a5f1793b5911b74ddd6f985ebb6fa78afaf42e533c2d73"} Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.269916 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6951b78-5bbf-48d5-9edc-efbde0f5e939","Type":"ContainerStarted","Data":"ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06"} Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.307124 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.307105215 podStartE2EDuration="2.307105215s" podCreationTimestamp="2025-12-05 19:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:27:09.289194071 +0000 UTC m=+1407.184416377" watchObservedRunningTime="2025-12-05 19:27:09.307105215 +0000 UTC m=+1407.202327521" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.354304 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 19:27:09 crc kubenswrapper[4828]: E1205 19:27:09.354778 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f832ed19-2e34-439e-bb52-37b2919b810e" containerName="nova-cell1-conductor-db-sync" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.354806 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f832ed19-2e34-439e-bb52-37b2919b810e" containerName="nova-cell1-conductor-db-sync" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.355051 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f832ed19-2e34-439e-bb52-37b2919b810e" containerName="nova-cell1-conductor-db-sync" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.355655 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.358022 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.362100 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.491641 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b\") " pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.492008 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx9dg\" (UniqueName: \"kubernetes.io/projected/5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b-kube-api-access-cx9dg\") pod \"nova-cell1-conductor-0\" (UID: \"5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b\") " pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.492095 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b\") " pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.593930 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx9dg\" (UniqueName: \"kubernetes.io/projected/5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b-kube-api-access-cx9dg\") pod \"nova-cell1-conductor-0\" (UID: \"5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b\") " pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.593980 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b\") " pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.594093 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b\") " pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.597500 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b\") " pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.605139 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b\") " pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.610454 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx9dg\" (UniqueName: \"kubernetes.io/projected/5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b-kube-api-access-cx9dg\") pod \"nova-cell1-conductor-0\" (UID: \"5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b\") " pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:09 crc kubenswrapper[4828]: I1205 19:27:09.688980 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:10 crc kubenswrapper[4828]: I1205 19:27:10.136963 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 19:27:10 crc kubenswrapper[4828]: W1205 19:27:10.157375 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f2a95e9_2c8f_4c04_b9f8_546e8a09aa7b.slice/crio-cdee1d156a05f4934c0f0efac4d1a4c762d33ef4109ed568ee7393b8c3b24452 WatchSource:0}: Error finding container cdee1d156a05f4934c0f0efac4d1a4c762d33ef4109ed568ee7393b8c3b24452: Status 404 returned error can't find the container with id cdee1d156a05f4934c0f0efac4d1a4c762d33ef4109ed568ee7393b8c3b24452 Dec 05 19:27:10 crc kubenswrapper[4828]: I1205 19:27:10.283899 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12374de6-1d67-43ff-8067-319d86b0fe6b","Type":"ContainerStarted","Data":"9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd"} Dec 05 19:27:10 crc kubenswrapper[4828]: I1205 19:27:10.284183 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12374de6-1d67-43ff-8067-319d86b0fe6b","Type":"ContainerStarted","Data":"f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20"} Dec 05 19:27:10 crc kubenswrapper[4828]: I1205 19:27:10.286974 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b","Type":"ContainerStarted","Data":"cdee1d156a05f4934c0f0efac4d1a4c762d33ef4109ed568ee7393b8c3b24452"} Dec 05 19:27:10 crc kubenswrapper[4828]: I1205 19:27:10.301285 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.301265069 podStartE2EDuration="2.301265069s" podCreationTimestamp="2025-12-05 19:27:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:27:10.299318377 +0000 UTC m=+1408.194540713" watchObservedRunningTime="2025-12-05 19:27:10.301265069 +0000 UTC m=+1408.196487375" Dec 05 19:27:11 crc kubenswrapper[4828]: I1205 19:27:11.297358 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b","Type":"ContainerStarted","Data":"bd58425bd46aa1c012282febb3b944c7f8c33abe9d261c8c20b5660e9ec96ab3"} Dec 05 19:27:11 crc kubenswrapper[4828]: I1205 19:27:11.297721 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:11 crc kubenswrapper[4828]: I1205 19:27:11.320035 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.320021238 podStartE2EDuration="2.320021238s" podCreationTimestamp="2025-12-05 19:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-05 19:27:11.318698112 +0000 UTC m=+1409.213920418" watchObservedRunningTime="2025-12-05 19:27:11.320021238 +0000 UTC m=+1409.215243544" Dec 05 19:27:12 crc kubenswrapper[4828]: I1205 19:27:12.345637 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:27:12 crc kubenswrapper[4828]: I1205 19:27:12.346124 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:27:12 crc kubenswrapper[4828]: I1205 19:27:12.404284 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:27:12 crc kubenswrapper[4828]: I1205 19:27:12.734814 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 19:27:13 crc kubenswrapper[4828]: I1205 19:27:13.391813 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:27:13 crc kubenswrapper[4828]: I1205 19:27:13.458070 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-47hsp"] Dec 05 19:27:13 crc kubenswrapper[4828]: I1205 19:27:13.752157 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 19:27:13 crc kubenswrapper[4828]: I1205 19:27:13.752219 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 19:27:15 crc kubenswrapper[4828]: I1205 19:27:15.336031 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-47hsp" podUID="81989aca-451d-4a70-b683-52f8675c3f12" containerName="registry-server" containerID="cri-o://4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8" gracePeriod=2 Dec 05 19:27:15 crc kubenswrapper[4828]: I1205 19:27:15.813872 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:27:15 crc kubenswrapper[4828]: I1205 19:27:15.915354 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81989aca-451d-4a70-b683-52f8675c3f12-catalog-content\") pod \"81989aca-451d-4a70-b683-52f8675c3f12\" (UID: \"81989aca-451d-4a70-b683-52f8675c3f12\") " Dec 05 19:27:15 crc kubenswrapper[4828]: I1205 19:27:15.915474 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb7bb\" (UniqueName: \"kubernetes.io/projected/81989aca-451d-4a70-b683-52f8675c3f12-kube-api-access-fb7bb\") pod \"81989aca-451d-4a70-b683-52f8675c3f12\" (UID: \"81989aca-451d-4a70-b683-52f8675c3f12\") " Dec 05 19:27:15 crc kubenswrapper[4828]: I1205 19:27:15.915545 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81989aca-451d-4a70-b683-52f8675c3f12-utilities\") pod \"81989aca-451d-4a70-b683-52f8675c3f12\" (UID: \"81989aca-451d-4a70-b683-52f8675c3f12\") " Dec 05 19:27:15 crc kubenswrapper[4828]: I1205 19:27:15.916366 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81989aca-451d-4a70-b683-52f8675c3f12-utilities" (OuterVolumeSpecName: "utilities") pod "81989aca-451d-4a70-b683-52f8675c3f12" (UID: "81989aca-451d-4a70-b683-52f8675c3f12"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:27:15 crc kubenswrapper[4828]: I1205 19:27:15.922517 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81989aca-451d-4a70-b683-52f8675c3f12-kube-api-access-fb7bb" (OuterVolumeSpecName: "kube-api-access-fb7bb") pod "81989aca-451d-4a70-b683-52f8675c3f12" (UID: "81989aca-451d-4a70-b683-52f8675c3f12"). InnerVolumeSpecName "kube-api-access-fb7bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.017788 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb7bb\" (UniqueName: \"kubernetes.io/projected/81989aca-451d-4a70-b683-52f8675c3f12-kube-api-access-fb7bb\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.017836 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81989aca-451d-4a70-b683-52f8675c3f12-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.030983 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81989aca-451d-4a70-b683-52f8675c3f12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81989aca-451d-4a70-b683-52f8675c3f12" (UID: "81989aca-451d-4a70-b683-52f8675c3f12"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.120310 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81989aca-451d-4a70-b683-52f8675c3f12-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.348316 4828 generic.go:334] "Generic (PLEG): container finished" podID="81989aca-451d-4a70-b683-52f8675c3f12" containerID="4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8" exitCode=0 Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.348351 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47hsp" event={"ID":"81989aca-451d-4a70-b683-52f8675c3f12","Type":"ContainerDied","Data":"4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8"} Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.348378 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47hsp" event={"ID":"81989aca-451d-4a70-b683-52f8675c3f12","Type":"ContainerDied","Data":"9b81541b062e2b95eaf0ba37bbd3381dc47463fc8dc39f6ff14dd4c8e1b2ff07"} Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.348395 4828 scope.go:117] "RemoveContainer" containerID="4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.348398 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-47hsp" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.369108 4828 scope.go:117] "RemoveContainer" containerID="70210a304a57aeaf56fff5eb45d7d190ce35c6a6807e910468d9b5fb664d6ac7" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.384292 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-47hsp"] Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.400379 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-47hsp"] Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.401205 4828 scope.go:117] "RemoveContainer" containerID="6f337fd65daa11bea0549196b6aaddfa8a05795d2385239d56db2928d673f3bd" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.448973 4828 scope.go:117] "RemoveContainer" containerID="4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8" Dec 05 19:27:16 crc kubenswrapper[4828]: E1205 19:27:16.449448 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8\": container with ID starting with 4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8 not found: ID does not exist" containerID="4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.449497 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8"} err="failed to get container status \"4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8\": rpc error: code = NotFound desc = could not find container \"4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8\": container with ID starting with 4263a313afd2d77ff64ed672d35424aa179a2fb0fa475e0543fb5c325320fbc8 not found: ID does not exist" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.449525 4828 scope.go:117] "RemoveContainer" containerID="70210a304a57aeaf56fff5eb45d7d190ce35c6a6807e910468d9b5fb664d6ac7" Dec 05 19:27:16 crc kubenswrapper[4828]: E1205 19:27:16.449867 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70210a304a57aeaf56fff5eb45d7d190ce35c6a6807e910468d9b5fb664d6ac7\": container with ID starting with 70210a304a57aeaf56fff5eb45d7d190ce35c6a6807e910468d9b5fb664d6ac7 not found: ID does not exist" containerID="70210a304a57aeaf56fff5eb45d7d190ce35c6a6807e910468d9b5fb664d6ac7" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.449897 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70210a304a57aeaf56fff5eb45d7d190ce35c6a6807e910468d9b5fb664d6ac7"} err="failed to get container status \"70210a304a57aeaf56fff5eb45d7d190ce35c6a6807e910468d9b5fb664d6ac7\": rpc error: code = NotFound desc = could not find container \"70210a304a57aeaf56fff5eb45d7d190ce35c6a6807e910468d9b5fb664d6ac7\": container with ID starting with 70210a304a57aeaf56fff5eb45d7d190ce35c6a6807e910468d9b5fb664d6ac7 not found: ID does not exist" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.449917 4828 scope.go:117] "RemoveContainer" containerID="6f337fd65daa11bea0549196b6aaddfa8a05795d2385239d56db2928d673f3bd" Dec 05 19:27:16 crc kubenswrapper[4828]: E1205 19:27:16.450212 4828 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"6f337fd65daa11bea0549196b6aaddfa8a05795d2385239d56db2928d673f3bd\": container with ID starting with 6f337fd65daa11bea0549196b6aaddfa8a05795d2385239d56db2928d673f3bd not found: ID does not exist" containerID="6f337fd65daa11bea0549196b6aaddfa8a05795d2385239d56db2928d673f3bd" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.450258 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f337fd65daa11bea0549196b6aaddfa8a05795d2385239d56db2928d673f3bd"} err="failed to get container status \"6f337fd65daa11bea0549196b6aaddfa8a05795d2385239d56db2928d673f3bd\": rpc error: code = NotFound desc = could not find container \"6f337fd65daa11bea0549196b6aaddfa8a05795d2385239d56db2928d673f3bd\": container with ID starting with 6f337fd65daa11bea0549196b6aaddfa8a05795d2385239d56db2928d673f3bd not found: ID does not exist" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.457227 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81989aca-451d-4a70-b683-52f8675c3f12" path="/var/lib/kubelet/pods/81989aca-451d-4a70-b683-52f8675c3f12/volumes" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.908778 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 19:27:16 crc kubenswrapper[4828]: I1205 19:27:16.908846 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 19:27:17 crc kubenswrapper[4828]: I1205 19:27:17.735108 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 19:27:17 crc kubenswrapper[4828]: I1205 19:27:17.773668 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 19:27:17 crc kubenswrapper[4828]: I1205 19:27:17.991030 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4e71cb60-681c-465a-b651-1a5cd566baa9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 19:27:17 crc kubenswrapper[4828]: I1205 19:27:17.991070 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4e71cb60-681c-465a-b651-1a5cd566baa9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 19:27:18 crc kubenswrapper[4828]: I1205 19:27:18.427522 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 19:27:18 crc kubenswrapper[4828]: I1205 19:27:18.752908 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 19:27:18 crc kubenswrapper[4828]: I1205 19:27:18.753908 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 19:27:19 crc kubenswrapper[4828]: I1205 19:27:19.720743 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 05 19:27:19 crc kubenswrapper[4828]: I1205 19:27:19.767064 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="12374de6-1d67-43ff-8067-319d86b0fe6b" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 19:27:19 crc kubenswrapper[4828]: I1205 19:27:19.767577 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="12374de6-1d67-43ff-8067-319d86b0fe6b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 19:27:26 crc kubenswrapper[4828]: I1205 19:27:26.912352 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 19:27:26 crc kubenswrapper[4828]: I1205 19:27:26.913765 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 19:27:26 crc kubenswrapper[4828]: I1205 19:27:26.916186 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 19:27:26 crc kubenswrapper[4828]: I1205 19:27:26.918393 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.487496 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.491418 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.644599 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mhhlc"] Dec 05 19:27:27 crc kubenswrapper[4828]: E1205 19:27:27.645002 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81989aca-451d-4a70-b683-52f8675c3f12" containerName="registry-server" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.645014 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="81989aca-451d-4a70-b683-52f8675c3f12" containerName="registry-server" Dec 05 19:27:27 crc kubenswrapper[4828]: E1205 19:27:27.645023 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81989aca-451d-4a70-b683-52f8675c3f12" containerName="extract-utilities" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.645030 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="81989aca-451d-4a70-b683-52f8675c3f12" containerName="extract-utilities" Dec 05 19:27:27 crc kubenswrapper[4828]: E1205 19:27:27.645058 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81989aca-451d-4a70-b683-52f8675c3f12" containerName="extract-content" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.645064 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="81989aca-451d-4a70-b683-52f8675c3f12" containerName="extract-content" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.645255 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="81989aca-451d-4a70-b683-52f8675c3f12" containerName="registry-server" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.646396 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.682128 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mhhlc"] Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.740313 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.740366 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flt8b\" (UniqueName: \"kubernetes.io/projected/1d80a607-092d-41ff-bc3c-8c8bc08fa239-kube-api-access-flt8b\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.740403 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.740469 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-config\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.740540 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.740622 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.841717 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.841757 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flt8b\" (UniqueName: \"kubernetes.io/projected/1d80a607-092d-41ff-bc3c-8c8bc08fa239-kube-api-access-flt8b\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.841784 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.841819 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-config\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.841872 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.841916 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.843066 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.843101 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.843334 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-config\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.843770 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.859075 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.879089 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flt8b\" (UniqueName: 
\"kubernetes.io/projected/1d80a607-092d-41ff-bc3c-8c8bc08fa239-kube-api-access-flt8b\") pod \"dnsmasq-dns-89c5cd4d5-mhhlc\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:27 crc kubenswrapper[4828]: I1205 19:27:27.967068 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:28 crc kubenswrapper[4828]: I1205 19:27:28.457903 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mhhlc"] Dec 05 19:27:28 crc kubenswrapper[4828]: I1205 19:27:28.498168 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" event={"ID":"1d80a607-092d-41ff-bc3c-8c8bc08fa239","Type":"ContainerStarted","Data":"b89734fda1eff7b794039404206c0b388a45441ac7de0359a2afde454e34136e"} Dec 05 19:27:28 crc kubenswrapper[4828]: I1205 19:27:28.758266 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 19:27:28 crc kubenswrapper[4828]: I1205 19:27:28.760672 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 19:27:28 crc kubenswrapper[4828]: I1205 19:27:28.764269 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 19:27:29 crc kubenswrapper[4828]: I1205 19:27:29.508226 4828 generic.go:334] "Generic (PLEG): container finished" podID="1d80a607-092d-41ff-bc3c-8c8bc08fa239" containerID="ff5c42483254c42ebecdefe1784dcabaaf8d24925c7b380d43c4dcf5e083c1b3" exitCode=0 Dec 05 19:27:29 crc kubenswrapper[4828]: I1205 19:27:29.508297 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" event={"ID":"1d80a607-092d-41ff-bc3c-8c8bc08fa239","Type":"ContainerDied","Data":"ff5c42483254c42ebecdefe1784dcabaaf8d24925c7b380d43c4dcf5e083c1b3"} Dec 05 19:27:29 crc kubenswrapper[4828]: I1205 19:27:29.531308 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 19:27:29 crc kubenswrapper[4828]: I1205 19:27:29.848453 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:27:29 crc kubenswrapper[4828]: I1205 19:27:29.848958 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="ceilometer-central-agent" containerID="cri-o://085bc3eb0c07c1c738dacb7d98d0049633936110afe5b6ed0b26f0b1f0d32e1d" gracePeriod=30 Dec 05 19:27:29 crc kubenswrapper[4828]: I1205 19:27:29.849050 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="sg-core" containerID="cri-o://c7b63794b0911ab0c47f0ff4fb06045b02b85b6bc09cbcb52550a459ca6411f2" gracePeriod=30 Dec 05 19:27:29 crc kubenswrapper[4828]: I1205 19:27:29.849128 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="proxy-httpd" containerID="cri-o://8f9297e5d9a74d260d74b306d7d27665a236c2c519f1a4b7c32f9f69969620b4" gracePeriod=30 Dec 05 19:27:29 crc kubenswrapper[4828]: I1205 19:27:29.849250 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" 
containerName="ceilometer-notification-agent" containerID="cri-o://2098afe60574f01b31be041fde941da08152f3ec1ef4cb9637f83db85e8e80ba" gracePeriod=30 Dec 05 19:27:30 crc kubenswrapper[4828]: I1205 19:27:30.278418 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:30 crc kubenswrapper[4828]: I1205 19:27:30.521553 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" event={"ID":"1d80a607-092d-41ff-bc3c-8c8bc08fa239","Type":"ContainerStarted","Data":"4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d"} Dec 05 19:27:30 crc kubenswrapper[4828]: I1205 19:27:30.521675 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:30 crc kubenswrapper[4828]: I1205 19:27:30.527170 4828 generic.go:334] "Generic (PLEG): container finished" podID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerID="8f9297e5d9a74d260d74b306d7d27665a236c2c519f1a4b7c32f9f69969620b4" exitCode=0 Dec 05 19:27:30 crc kubenswrapper[4828]: I1205 19:27:30.527209 4828 generic.go:334] "Generic (PLEG): container finished" podID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerID="c7b63794b0911ab0c47f0ff4fb06045b02b85b6bc09cbcb52550a459ca6411f2" exitCode=2 Dec 05 19:27:30 crc kubenswrapper[4828]: I1205 19:27:30.527222 4828 generic.go:334] "Generic (PLEG): container finished" podID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerID="085bc3eb0c07c1c738dacb7d98d0049633936110afe5b6ed0b26f0b1f0d32e1d" exitCode=0 Dec 05 19:27:30 crc kubenswrapper[4828]: I1205 19:27:30.527509 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092a7272-3983-4d2e-a1de-7ef49e53c165","Type":"ContainerDied","Data":"8f9297e5d9a74d260d74b306d7d27665a236c2c519f1a4b7c32f9f69969620b4"} Dec 05 19:27:30 crc kubenswrapper[4828]: I1205 19:27:30.527540 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092a7272-3983-4d2e-a1de-7ef49e53c165","Type":"ContainerDied","Data":"c7b63794b0911ab0c47f0ff4fb06045b02b85b6bc09cbcb52550a459ca6411f2"} Dec 05 19:27:30 crc kubenswrapper[4828]: I1205 19:27:30.527553 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092a7272-3983-4d2e-a1de-7ef49e53c165","Type":"ContainerDied","Data":"085bc3eb0c07c1c738dacb7d98d0049633936110afe5b6ed0b26f0b1f0d32e1d"} Dec 05 19:27:30 crc kubenswrapper[4828]: I1205 19:27:30.527722 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4e71cb60-681c-465a-b651-1a5cd566baa9" containerName="nova-api-log" containerID="cri-o://1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7" gracePeriod=30 Dec 05 19:27:30 crc kubenswrapper[4828]: I1205 19:27:30.527860 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4e71cb60-681c-465a-b651-1a5cd566baa9" containerName="nova-api-api" containerID="cri-o://c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559" gracePeriod=30 Dec 05 19:27:30 crc kubenswrapper[4828]: I1205 19:27:30.543481 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" podStartSLOduration=3.543459962 podStartE2EDuration="3.543459962s" podCreationTimestamp="2025-12-05 19:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-05 19:27:30.541410248 +0000 UTC m=+1428.436632554" watchObservedRunningTime="2025-12-05 19:27:30.543459962 +0000 UTC m=+1428.438682268" Dec 05 19:27:31 crc kubenswrapper[4828]: I1205 19:27:31.536860 4828 generic.go:334] "Generic (PLEG): container finished" podID="4e71cb60-681c-465a-b651-1a5cd566baa9" containerID="1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7" exitCode=143 Dec 05 19:27:31 crc kubenswrapper[4828]: I1205 19:27:31.537029 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e71cb60-681c-465a-b651-1a5cd566baa9","Type":"ContainerDied","Data":"1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7"} Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.550402 4828 generic.go:334] "Generic (PLEG): container finished" podID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerID="2098afe60574f01b31be041fde941da08152f3ec1ef4cb9637f83db85e8e80ba" exitCode=0 Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.550477 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092a7272-3983-4d2e-a1de-7ef49e53c165","Type":"ContainerDied","Data":"2098afe60574f01b31be041fde941da08152f3ec1ef4cb9637f83db85e8e80ba"} Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.665632 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.745742 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092a7272-3983-4d2e-a1de-7ef49e53c165-run-httpd\") pod \"092a7272-3983-4d2e-a1de-7ef49e53c165\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.745800 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092a7272-3983-4d2e-a1de-7ef49e53c165-log-httpd\") pod \"092a7272-3983-4d2e-a1de-7ef49e53c165\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.745834 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-ceilometer-tls-certs\") pod \"092a7272-3983-4d2e-a1de-7ef49e53c165\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.745884 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z776j\" (UniqueName: \"kubernetes.io/projected/092a7272-3983-4d2e-a1de-7ef49e53c165-kube-api-access-z776j\") pod \"092a7272-3983-4d2e-a1de-7ef49e53c165\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.745914 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-combined-ca-bundle\") pod \"092a7272-3983-4d2e-a1de-7ef49e53c165\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.746026 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-sg-core-conf-yaml\") pod \"092a7272-3983-4d2e-a1de-7ef49e53c165\" (UID: 
\"092a7272-3983-4d2e-a1de-7ef49e53c165\") " Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.746044 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-scripts\") pod \"092a7272-3983-4d2e-a1de-7ef49e53c165\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.746080 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-config-data\") pod \"092a7272-3983-4d2e-a1de-7ef49e53c165\" (UID: \"092a7272-3983-4d2e-a1de-7ef49e53c165\") " Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.746492 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092a7272-3983-4d2e-a1de-7ef49e53c165-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "092a7272-3983-4d2e-a1de-7ef49e53c165" (UID: "092a7272-3983-4d2e-a1de-7ef49e53c165"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.746733 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092a7272-3983-4d2e-a1de-7ef49e53c165-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "092a7272-3983-4d2e-a1de-7ef49e53c165" (UID: "092a7272-3983-4d2e-a1de-7ef49e53c165"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.751981 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092a7272-3983-4d2e-a1de-7ef49e53c165-kube-api-access-z776j" (OuterVolumeSpecName: "kube-api-access-z776j") pod "092a7272-3983-4d2e-a1de-7ef49e53c165" (UID: "092a7272-3983-4d2e-a1de-7ef49e53c165"). InnerVolumeSpecName "kube-api-access-z776j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.770489 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-scripts" (OuterVolumeSpecName: "scripts") pod "092a7272-3983-4d2e-a1de-7ef49e53c165" (UID: "092a7272-3983-4d2e-a1de-7ef49e53c165"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.787588 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "092a7272-3983-4d2e-a1de-7ef49e53c165" (UID: "092a7272-3983-4d2e-a1de-7ef49e53c165"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.799622 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "092a7272-3983-4d2e-a1de-7ef49e53c165" (UID: "092a7272-3983-4d2e-a1de-7ef49e53c165"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.830661 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "092a7272-3983-4d2e-a1de-7ef49e53c165" (UID: "092a7272-3983-4d2e-a1de-7ef49e53c165"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.839886 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-config-data" (OuterVolumeSpecName: "config-data") pod "092a7272-3983-4d2e-a1de-7ef49e53c165" (UID: "092a7272-3983-4d2e-a1de-7ef49e53c165"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.847737 4828 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092a7272-3983-4d2e-a1de-7ef49e53c165-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.847768 4828 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092a7272-3983-4d2e-a1de-7ef49e53c165-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.847777 4828 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.847787 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z776j\" (UniqueName: \"kubernetes.io/projected/092a7272-3983-4d2e-a1de-7ef49e53c165-kube-api-access-z776j\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.847796 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.847806 4828 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.847814 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:32 crc kubenswrapper[4828]: I1205 19:27:32.847848 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092a7272-3983-4d2e-a1de-7ef49e53c165-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.562800 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092a7272-3983-4d2e-a1de-7ef49e53c165","Type":"ContainerDied","Data":"1bf1eaafe6f0b112af7aba5cd6f4fda3c80955f93675b6a4c74a51f9f53de022"} Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.563069 4828 scope.go:117] "RemoveContainer" containerID="8f9297e5d9a74d260d74b306d7d27665a236c2c519f1a4b7c32f9f69969620b4" Dec 05 19:27:33 crc 
kubenswrapper[4828]: I1205 19:27:33.563151 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.565922 4828 generic.go:334] "Generic (PLEG): container finished" podID="078b2cf8-a9b3-422f-b0e9-2d60586d9062" containerID="35e579cb56f91cd1eff276f8271f0f95860e777fbd8d75f885d1097a5f71078b" exitCode=137 Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.565969 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"078b2cf8-a9b3-422f-b0e9-2d60586d9062","Type":"ContainerDied","Data":"35e579cb56f91cd1eff276f8271f0f95860e777fbd8d75f885d1097a5f71078b"} Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.565999 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"078b2cf8-a9b3-422f-b0e9-2d60586d9062","Type":"ContainerDied","Data":"acc350bc109ef3dc30840c60710ab7d7d95658dbf0b830383b50ccbfdbb58414"} Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.566011 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acc350bc109ef3dc30840c60710ab7d7d95658dbf0b830383b50ccbfdbb58414" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.592451 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.610977 4828 scope.go:117] "RemoveContainer" containerID="c7b63794b0911ab0c47f0ff4fb06045b02b85b6bc09cbcb52550a459ca6411f2" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.611778 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.665862 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.674758 4828 scope.go:117] "RemoveContainer" containerID="2098afe60574f01b31be041fde941da08152f3ec1ef4cb9637f83db85e8e80ba" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.683921 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:27:33 crc kubenswrapper[4828]: E1205 19:27:33.684388 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="sg-core" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.684406 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="sg-core" Dec 05 19:27:33 crc kubenswrapper[4828]: E1205 19:27:33.684420 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="ceilometer-central-agent" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.684427 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="ceilometer-central-agent" Dec 05 19:27:33 crc kubenswrapper[4828]: E1205 19:27:33.684451 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="proxy-httpd" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.684458 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="proxy-httpd" Dec 05 19:27:33 crc kubenswrapper[4828]: E1205 19:27:33.684464 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="ceilometer-notification-agent" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.684470 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="ceilometer-notification-agent" Dec 05 19:27:33 crc kubenswrapper[4828]: E1205 19:27:33.684484 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078b2cf8-a9b3-422f-b0e9-2d60586d9062" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.684489 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="078b2cf8-a9b3-422f-b0e9-2d60586d9062" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.684699 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="078b2cf8-a9b3-422f-b0e9-2d60586d9062" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.684714 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="ceilometer-notification-agent" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.684733 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="sg-core" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.684746 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="ceilometer-central-agent" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.684752 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" containerName="proxy-httpd" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.686445 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.688623 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.688898 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.689430 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.700130 4828 scope.go:117] "RemoveContainer" containerID="085bc3eb0c07c1c738dacb7d98d0049633936110afe5b6ed0b26f0b1f0d32e1d" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.701060 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.763649 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7p8q\" (UniqueName: \"kubernetes.io/projected/078b2cf8-a9b3-422f-b0e9-2d60586d9062-kube-api-access-t7p8q\") pod \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\" (UID: \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\") " Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.763730 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078b2cf8-a9b3-422f-b0e9-2d60586d9062-combined-ca-bundle\") pod \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\" (UID: \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\") " Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.763869 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078b2cf8-a9b3-422f-b0e9-2d60586d9062-config-data\") pod \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\" (UID: \"078b2cf8-a9b3-422f-b0e9-2d60586d9062\") " Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.769800 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078b2cf8-a9b3-422f-b0e9-2d60586d9062-kube-api-access-t7p8q" (OuterVolumeSpecName: "kube-api-access-t7p8q") pod "078b2cf8-a9b3-422f-b0e9-2d60586d9062" (UID: "078b2cf8-a9b3-422f-b0e9-2d60586d9062"). InnerVolumeSpecName "kube-api-access-t7p8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.794599 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078b2cf8-a9b3-422f-b0e9-2d60586d9062-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "078b2cf8-a9b3-422f-b0e9-2d60586d9062" (UID: "078b2cf8-a9b3-422f-b0e9-2d60586d9062"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.794858 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078b2cf8-a9b3-422f-b0e9-2d60586d9062-config-data" (OuterVolumeSpecName: "config-data") pod "078b2cf8-a9b3-422f-b0e9-2d60586d9062" (UID: "078b2cf8-a9b3-422f-b0e9-2d60586d9062"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.866918 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.866980 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.867029 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1984b123-aa0e-4af4-a396-76c783a22b45-run-httpd\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.867057 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-config-data\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.867139 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-scripts\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.867210 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpr9b\" (UniqueName: \"kubernetes.io/projected/1984b123-aa0e-4af4-a396-76c783a22b45-kube-api-access-xpr9b\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.867243 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.867272 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1984b123-aa0e-4af4-a396-76c783a22b45-log-httpd\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.867409 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078b2cf8-a9b3-422f-b0e9-2d60586d9062-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.867440 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7p8q\" (UniqueName: \"kubernetes.io/projected/078b2cf8-a9b3-422f-b0e9-2d60586d9062-kube-api-access-t7p8q\") 
on node \"crc\" DevicePath \"\"" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.867452 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078b2cf8-a9b3-422f-b0e9-2d60586d9062-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.989223 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpr9b\" (UniqueName: \"kubernetes.io/projected/1984b123-aa0e-4af4-a396-76c783a22b45-kube-api-access-xpr9b\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.989299 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.989351 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1984b123-aa0e-4af4-a396-76c783a22b45-log-httpd\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.989450 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.989508 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.989585 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1984b123-aa0e-4af4-a396-76c783a22b45-run-httpd\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.989625 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-config-data\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.989729 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-scripts\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.991453 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1984b123-aa0e-4af4-a396-76c783a22b45-run-httpd\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.992009 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1984b123-aa0e-4af4-a396-76c783a22b45-log-httpd\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.995021 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:33 crc kubenswrapper[4828]: I1205 19:27:33.997673 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-scripts\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.000548 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.000690 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.001697 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1984b123-aa0e-4af4-a396-76c783a22b45-config-data\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.011805 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpr9b\" (UniqueName: \"kubernetes.io/projected/1984b123-aa0e-4af4-a396-76c783a22b45-kube-api-access-xpr9b\") pod \"ceilometer-0\" (UID: \"1984b123-aa0e-4af4-a396-76c783a22b45\") " pod="openstack/ceilometer-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.086900 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.174612 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.198551 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbwrr\" (UniqueName: \"kubernetes.io/projected/4e71cb60-681c-465a-b651-1a5cd566baa9-kube-api-access-cbwrr\") pod \"4e71cb60-681c-465a-b651-1a5cd566baa9\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.198641 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e71cb60-681c-465a-b651-1a5cd566baa9-config-data\") pod \"4e71cb60-681c-465a-b651-1a5cd566baa9\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.198680 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e71cb60-681c-465a-b651-1a5cd566baa9-logs\") pod \"4e71cb60-681c-465a-b651-1a5cd566baa9\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.198796 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e71cb60-681c-465a-b651-1a5cd566baa9-combined-ca-bundle\") pod \"4e71cb60-681c-465a-b651-1a5cd566baa9\" (UID: \"4e71cb60-681c-465a-b651-1a5cd566baa9\") " Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.199308 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e71cb60-681c-465a-b651-1a5cd566baa9-logs" (OuterVolumeSpecName: "logs") pod "4e71cb60-681c-465a-b651-1a5cd566baa9" (UID: "4e71cb60-681c-465a-b651-1a5cd566baa9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.202174 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e71cb60-681c-465a-b651-1a5cd566baa9-kube-api-access-cbwrr" (OuterVolumeSpecName: "kube-api-access-cbwrr") pod "4e71cb60-681c-465a-b651-1a5cd566baa9" (UID: "4e71cb60-681c-465a-b651-1a5cd566baa9"). InnerVolumeSpecName "kube-api-access-cbwrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.228323 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e71cb60-681c-465a-b651-1a5cd566baa9-config-data" (OuterVolumeSpecName: "config-data") pod "4e71cb60-681c-465a-b651-1a5cd566baa9" (UID: "4e71cb60-681c-465a-b651-1a5cd566baa9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.236201 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e71cb60-681c-465a-b651-1a5cd566baa9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e71cb60-681c-465a-b651-1a5cd566baa9" (UID: "4e71cb60-681c-465a-b651-1a5cd566baa9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.301300 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e71cb60-681c-465a-b651-1a5cd566baa9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.301329 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbwrr\" (UniqueName: \"kubernetes.io/projected/4e71cb60-681c-465a-b651-1a5cd566baa9-kube-api-access-cbwrr\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.301342 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e71cb60-681c-465a-b651-1a5cd566baa9-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.301353 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e71cb60-681c-465a-b651-1a5cd566baa9-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.457060 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="092a7272-3983-4d2e-a1de-7ef49e53c165" path="/var/lib/kubelet/pods/092a7272-3983-4d2e-a1de-7ef49e53c165/volumes" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.577127 4828 generic.go:334] "Generic (PLEG): container finished" podID="4e71cb60-681c-465a-b651-1a5cd566baa9" containerID="c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559" exitCode=0 Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.577177 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.577204 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e71cb60-681c-465a-b651-1a5cd566baa9","Type":"ContainerDied","Data":"c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559"} Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.577233 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e71cb60-681c-465a-b651-1a5cd566baa9","Type":"ContainerDied","Data":"28b8eca685e955b1a906c56b0f36bbfc061465cf816e5432d3d2e3fdf92ae537"} Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.577253 4828 scope.go:117] "RemoveContainer" containerID="c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.579706 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.601986 4828 scope.go:117] "RemoveContainer" containerID="1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.602693 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.656518 4828 scope.go:117] "RemoveContainer" containerID="c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559" Dec 05 19:27:34 crc kubenswrapper[4828]: E1205 19:27:34.656971 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559\": container with ID starting with c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559 not found: ID does not exist" containerID="c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.657002 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559"} err="failed to get container status \"c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559\": rpc error: code = NotFound desc = could not find container \"c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559\": container with ID starting with c501c424d717f67c2b3e662c00f5ed49e7f42a3095552e47de519ac3935c8559 not found: ID does not exist" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.657025 4828 scope.go:117] "RemoveContainer" containerID="1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7" Dec 05 19:27:34 crc kubenswrapper[4828]: E1205 19:27:34.659563 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7\": container with ID starting with 1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7 not found: ID does not exist" containerID="1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.659597 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7"} err="failed to get container status \"1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7\": rpc error: code = NotFound desc = could not find container \"1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7\": container with ID starting with 1f03f8da067f45a2544b8279f42952f24810a5f9e05414d6d126bee9f451a6a7 not found: ID does not exist" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.662554 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.686583 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:34 crc kubenswrapper[4828]: E1205 19:27:34.687048 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e71cb60-681c-465a-b651-1a5cd566baa9" containerName="nova-api-log" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.687067 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e71cb60-681c-465a-b651-1a5cd566baa9" containerName="nova-api-log" Dec 05 
19:27:34 crc kubenswrapper[4828]: E1205 19:27:34.687104 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e71cb60-681c-465a-b651-1a5cd566baa9" containerName="nova-api-api" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.687112 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e71cb60-681c-465a-b651-1a5cd566baa9" containerName="nova-api-api" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.687285 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e71cb60-681c-465a-b651-1a5cd566baa9" containerName="nova-api-log" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.687304 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e71cb60-681c-465a-b651-1a5cd566baa9" containerName="nova-api-api" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.688284 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.691687 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.692476 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.692687 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.696990 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.708763 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.726198 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.736563 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.738237 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.743410 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.743697 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.743814 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.749992 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.760413 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.811091 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.811140 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8nzl\" (UniqueName: \"kubernetes.io/projected/f127a8bd-9835-442a-a12d-7eeae0cf6296-kube-api-access-w8nzl\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.811961 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-public-tls-certs\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.812959 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-config-data\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.813010 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.813213 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f127a8bd-9835-442a-a12d-7eeae0cf6296-logs\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.914487 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddf78da4-c3d6-41b2-b8e1-803e3f075586-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.914570 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-config-data\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.914616 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.914658 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf78da4-c3d6-41b2-b8e1-803e3f075586-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.914697 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mvwv\" (UniqueName: \"kubernetes.io/projected/ddf78da4-c3d6-41b2-b8e1-803e3f075586-kube-api-access-2mvwv\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.914741 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddf78da4-c3d6-41b2-b8e1-803e3f075586-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.914840 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f127a8bd-9835-442a-a12d-7eeae0cf6296-logs\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.914909 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.914931 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8nzl\" (UniqueName: \"kubernetes.io/projected/f127a8bd-9835-442a-a12d-7eeae0cf6296-kube-api-access-w8nzl\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.914954 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddf78da4-c3d6-41b2-b8e1-803e3f075586-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.915001 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-public-tls-certs\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.915537 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f127a8bd-9835-442a-a12d-7eeae0cf6296-logs\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.936511 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-config-data\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.937003 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.941097 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.944688 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-public-tls-certs\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:34 crc kubenswrapper[4828]: I1205 19:27:34.948987 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8nzl\" (UniqueName: \"kubernetes.io/projected/f127a8bd-9835-442a-a12d-7eeae0cf6296-kube-api-access-w8nzl\") pod \"nova-api-0\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " pod="openstack/nova-api-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.016231 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddf78da4-c3d6-41b2-b8e1-803e3f075586-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.016315 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf78da4-c3d6-41b2-b8e1-803e3f075586-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.016356 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mvwv\" (UniqueName: \"kubernetes.io/projected/ddf78da4-c3d6-41b2-b8e1-803e3f075586-kube-api-access-2mvwv\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.016384 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddf78da4-c3d6-41b2-b8e1-803e3f075586-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.016435 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddf78da4-c3d6-41b2-b8e1-803e3f075586-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.019103 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.032469 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddf78da4-c3d6-41b2-b8e1-803e3f075586-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.032596 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf78da4-c3d6-41b2-b8e1-803e3f075586-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.032741 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddf78da4-c3d6-41b2-b8e1-803e3f075586-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.037800 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddf78da4-c3d6-41b2-b8e1-803e3f075586-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.038739 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mvwv\" (UniqueName: \"kubernetes.io/projected/ddf78da4-c3d6-41b2-b8e1-803e3f075586-kube-api-access-2mvwv\") pod \"nova-cell1-novncproxy-0\" (UID: \"ddf78da4-c3d6-41b2-b8e1-803e3f075586\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.097023 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.618184 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1984b123-aa0e-4af4-a396-76c783a22b45","Type":"ContainerStarted","Data":"4bec0f2053944387c0c225494b4c04254593d5c57f0b5b90cc152556f13cc97e"} Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.623663 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:35 crc kubenswrapper[4828]: I1205 19:27:35.687714 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 19:27:36 crc kubenswrapper[4828]: I1205 19:27:36.477499 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="078b2cf8-a9b3-422f-b0e9-2d60586d9062" path="/var/lib/kubelet/pods/078b2cf8-a9b3-422f-b0e9-2d60586d9062/volumes" Dec 05 19:27:36 crc kubenswrapper[4828]: I1205 19:27:36.478958 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e71cb60-681c-465a-b651-1a5cd566baa9" path="/var/lib/kubelet/pods/4e71cb60-681c-465a-b651-1a5cd566baa9/volumes" Dec 05 19:27:36 crc kubenswrapper[4828]: I1205 19:27:36.630163 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f127a8bd-9835-442a-a12d-7eeae0cf6296","Type":"ContainerStarted","Data":"054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454"} Dec 05 19:27:36 crc kubenswrapper[4828]: I1205 19:27:36.630213 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f127a8bd-9835-442a-a12d-7eeae0cf6296","Type":"ContainerStarted","Data":"38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac"} Dec 05 19:27:36 crc kubenswrapper[4828]: I1205 19:27:36.630225 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f127a8bd-9835-442a-a12d-7eeae0cf6296","Type":"ContainerStarted","Data":"dd4c31ce977ce3c0ec13977a08d6714b8080cec631a30ae66d70edc9060a6490"} Dec 05 19:27:36 crc kubenswrapper[4828]: I1205 19:27:36.632152 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1984b123-aa0e-4af4-a396-76c783a22b45","Type":"ContainerStarted","Data":"2dc25b3ff748111d9ff6f48adf15ab2f587bada64279e03c0f4fe412dc0229c7"} Dec 05 19:27:36 crc kubenswrapper[4828]: I1205 19:27:36.632188 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1984b123-aa0e-4af4-a396-76c783a22b45","Type":"ContainerStarted","Data":"5a1ab3721b4476354fc53b2836795e9e2c397cc66d0a35754f0f4e905006bced"} Dec 05 19:27:36 crc kubenswrapper[4828]: I1205 19:27:36.633954 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ddf78da4-c3d6-41b2-b8e1-803e3f075586","Type":"ContainerStarted","Data":"bb5209088e7874ec50a2a0c51cfb32d5710d3a6eed1cdd58ae49d5e9fdd2198a"} Dec 05 19:27:36 crc kubenswrapper[4828]: I1205 19:27:36.633982 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ddf78da4-c3d6-41b2-b8e1-803e3f075586","Type":"ContainerStarted","Data":"e4382136461d86e8222aaea623cf07574e40048f78f74a48d51e457d18ea42de"} Dec 05 19:27:36 crc kubenswrapper[4828]: I1205 19:27:36.656788 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.656766167 podStartE2EDuration="2.656766167s" podCreationTimestamp="2025-12-05 19:27:34 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:27:36.648403441 +0000 UTC m=+1434.543625747" watchObservedRunningTime="2025-12-05 19:27:36.656766167 +0000 UTC m=+1434.551988473" Dec 05 19:27:36 crc kubenswrapper[4828]: I1205 19:27:36.673912 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.673887629 podStartE2EDuration="2.673887629s" podCreationTimestamp="2025-12-05 19:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:27:36.669512971 +0000 UTC m=+1434.564735297" watchObservedRunningTime="2025-12-05 19:27:36.673887629 +0000 UTC m=+1434.569109935" Dec 05 19:27:37 crc kubenswrapper[4828]: I1205 19:27:37.646989 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1984b123-aa0e-4af4-a396-76c783a22b45","Type":"ContainerStarted","Data":"e1d6a7218fe2bbc2651b615a8002c719927b56681abb89e0e6b8d00579c63455"} Dec 05 19:27:37 crc kubenswrapper[4828]: I1205 19:27:37.969062 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.028633 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-d2lkm"] Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.028912 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" podUID="0fc12a4b-c235-4a30-b616-06b6ccb812a0" containerName="dnsmasq-dns" containerID="cri-o://afc1891d2c3a3b9b3b050456cb66b4f37ea89998ae0b50557706423a56fc7b16" gracePeriod=10 Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.657259 4828 generic.go:334] "Generic (PLEG): container finished" podID="0fc12a4b-c235-4a30-b616-06b6ccb812a0" containerID="afc1891d2c3a3b9b3b050456cb66b4f37ea89998ae0b50557706423a56fc7b16" exitCode=0 Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.657471 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" event={"ID":"0fc12a4b-c235-4a30-b616-06b6ccb812a0","Type":"ContainerDied","Data":"afc1891d2c3a3b9b3b050456cb66b4f37ea89998ae0b50557706423a56fc7b16"} Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.657904 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" event={"ID":"0fc12a4b-c235-4a30-b616-06b6ccb812a0","Type":"ContainerDied","Data":"7c833188e77bf40fbdc83fc4d10dffce54f175de167f2ded7b9c504713d402d7"} Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.657928 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c833188e77bf40fbdc83fc4d10dffce54f175de167f2ded7b9c504713d402d7" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.675624 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.781977 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-ovsdbserver-nb\") pod \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.782123 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-dns-svc\") pod \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.782150 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-config\") pod \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.782184 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx2zt\" (UniqueName: \"kubernetes.io/projected/0fc12a4b-c235-4a30-b616-06b6ccb812a0-kube-api-access-hx2zt\") pod \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.782272 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-ovsdbserver-sb\") pod \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.782336 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-dns-swift-storage-0\") pod \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\" (UID: \"0fc12a4b-c235-4a30-b616-06b6ccb812a0\") " Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.790445 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fc12a4b-c235-4a30-b616-06b6ccb812a0-kube-api-access-hx2zt" (OuterVolumeSpecName: "kube-api-access-hx2zt") pod "0fc12a4b-c235-4a30-b616-06b6ccb812a0" (UID: "0fc12a4b-c235-4a30-b616-06b6ccb812a0"). InnerVolumeSpecName "kube-api-access-hx2zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.856693 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0fc12a4b-c235-4a30-b616-06b6ccb812a0" (UID: "0fc12a4b-c235-4a30-b616-06b6ccb812a0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.861984 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0fc12a4b-c235-4a30-b616-06b6ccb812a0" (UID: "0fc12a4b-c235-4a30-b616-06b6ccb812a0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.863273 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-config" (OuterVolumeSpecName: "config") pod "0fc12a4b-c235-4a30-b616-06b6ccb812a0" (UID: "0fc12a4b-c235-4a30-b616-06b6ccb812a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.884600 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.884634 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.884646 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.884654 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx2zt\" (UniqueName: \"kubernetes.io/projected/0fc12a4b-c235-4a30-b616-06b6ccb812a0-kube-api-access-hx2zt\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.886324 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0fc12a4b-c235-4a30-b616-06b6ccb812a0" (UID: "0fc12a4b-c235-4a30-b616-06b6ccb812a0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.893048 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0fc12a4b-c235-4a30-b616-06b6ccb812a0" (UID: "0fc12a4b-c235-4a30-b616-06b6ccb812a0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.986131 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:38 crc kubenswrapper[4828]: I1205 19:27:38.986173 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fc12a4b-c235-4a30-b616-06b6ccb812a0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:39 crc kubenswrapper[4828]: I1205 19:27:39.673878 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-d2lkm" Dec 05 19:27:39 crc kubenswrapper[4828]: I1205 19:27:39.675885 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1984b123-aa0e-4af4-a396-76c783a22b45","Type":"ContainerStarted","Data":"7ccf259e5f308d1be0f2c5891f8474c7929ca4951c8642f79911cf4248400272"} Dec 05 19:27:39 crc kubenswrapper[4828]: I1205 19:27:39.676267 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 19:27:39 crc kubenswrapper[4828]: I1205 19:27:39.712765 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.968811536 podStartE2EDuration="6.712748875s" podCreationTimestamp="2025-12-05 19:27:33 +0000 UTC" firstStartedPulling="2025-12-05 19:27:34.732633542 +0000 UTC m=+1432.627855848" lastFinishedPulling="2025-12-05 19:27:38.476570881 +0000 UTC m=+1436.371793187" observedRunningTime="2025-12-05 19:27:39.696479526 +0000 UTC m=+1437.591701832" watchObservedRunningTime="2025-12-05 19:27:39.712748875 +0000 UTC m=+1437.607971181" Dec 05 19:27:39 crc kubenswrapper[4828]: I1205 19:27:39.734656 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-d2lkm"] Dec 05 19:27:39 crc kubenswrapper[4828]: I1205 19:27:39.743110 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-d2lkm"] Dec 05 19:27:40 crc kubenswrapper[4828]: I1205 19:27:40.097970 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:40 crc kubenswrapper[4828]: I1205 19:27:40.465412 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fc12a4b-c235-4a30-b616-06b6ccb812a0" path="/var/lib/kubelet/pods/0fc12a4b-c235-4a30-b616-06b6ccb812a0/volumes" Dec 05 19:27:45 crc kubenswrapper[4828]: I1205 19:27:45.019647 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 19:27:45 crc kubenswrapper[4828]: I1205 19:27:45.020225 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 19:27:45 crc kubenswrapper[4828]: I1205 19:27:45.129765 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:45 crc kubenswrapper[4828]: I1205 19:27:45.165864 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:45 crc kubenswrapper[4828]: I1205 19:27:45.808817 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 05 19:27:45 crc kubenswrapper[4828]: I1205 19:27:45.992102 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5qtt7"] Dec 05 19:27:45 crc kubenswrapper[4828]: E1205 19:27:45.992491 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc12a4b-c235-4a30-b616-06b6ccb812a0" containerName="dnsmasq-dns" Dec 05 19:27:45 crc kubenswrapper[4828]: I1205 19:27:45.992507 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc12a4b-c235-4a30-b616-06b6ccb812a0" containerName="dnsmasq-dns" Dec 05 19:27:45 crc kubenswrapper[4828]: E1205 19:27:45.992519 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc12a4b-c235-4a30-b616-06b6ccb812a0" containerName="init" Dec 05 19:27:45 crc kubenswrapper[4828]: I1205 
Dec 05 19:27:45 crc kubenswrapper[4828]: I1205 19:27:45.992725 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fc12a4b-c235-4a30-b616-06b6ccb812a0" containerName="dnsmasq-dns"
Dec 05 19:27:45 crc kubenswrapper[4828]: I1205 19:27:45.995236 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:45 crc kubenswrapper[4828]: I1205 19:27:45.997179 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Dec 05 19:27:45 crc kubenswrapper[4828]: I1205 19:27:45.997670 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.005658 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5qtt7"]
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.030978 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f127a8bd-9835-442a-a12d-7eeae0cf6296" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.031297 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f127a8bd-9835-442a-a12d-7eeae0cf6296" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.066651 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5qtt7\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") " pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.067104 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-config-data\") pod \"nova-cell1-cell-mapping-5qtt7\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") " pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.067223 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-scripts\") pod \"nova-cell1-cell-mapping-5qtt7\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") " pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.067310 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjbq4\" (UniqueName: \"kubernetes.io/projected/b4931033-0f5f-4d93-b809-45da0865ddfa-kube-api-access-jjbq4\") pod \"nova-cell1-cell-mapping-5qtt7\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") " pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.169011 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5qtt7\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") " pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.169136 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-config-data\") pod \"nova-cell1-cell-mapping-5qtt7\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") " pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.169177 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-scripts\") pod \"nova-cell1-cell-mapping-5qtt7\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") " pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.169208 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjbq4\" (UniqueName: \"kubernetes.io/projected/b4931033-0f5f-4d93-b809-45da0865ddfa-kube-api-access-jjbq4\") pod \"nova-cell1-cell-mapping-5qtt7\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") " pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.177854 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-config-data\") pod \"nova-cell1-cell-mapping-5qtt7\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") " pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.178448 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5qtt7\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") " pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.178697 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-scripts\") pod \"nova-cell1-cell-mapping-5qtt7\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") " pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.188881 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjbq4\" (UniqueName: \"kubernetes.io/projected/b4931033-0f5f-4d93-b809-45da0865ddfa-kube-api-access-jjbq4\") pod \"nova-cell1-cell-mapping-5qtt7\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") " pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.321890 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:46 crc kubenswrapper[4828]: I1205 19:27:46.834793 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5qtt7"]
Dec 05 19:27:46 crc kubenswrapper[4828]: W1205 19:27:46.839309 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4931033_0f5f_4d93_b809_45da0865ddfa.slice/crio-b5ffd6cdb5068adf5ee2b053d068b9117da7d8acaebea66e866ca99300e772bc WatchSource:0}: Error finding container b5ffd6cdb5068adf5ee2b053d068b9117da7d8acaebea66e866ca99300e772bc: Status 404 returned error can't find the container with id b5ffd6cdb5068adf5ee2b053d068b9117da7d8acaebea66e866ca99300e772bc
Dec 05 19:27:47 crc kubenswrapper[4828]: I1205 19:27:47.818031 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5qtt7" event={"ID":"b4931033-0f5f-4d93-b809-45da0865ddfa","Type":"ContainerStarted","Data":"f0e5f1df9a55681c8f67d807d61cf9713a0f9fa5f9a56a7b9d54425efa189eca"}
Dec 05 19:27:47 crc kubenswrapper[4828]: I1205 19:27:47.818367 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5qtt7" event={"ID":"b4931033-0f5f-4d93-b809-45da0865ddfa","Type":"ContainerStarted","Data":"b5ffd6cdb5068adf5ee2b053d068b9117da7d8acaebea66e866ca99300e772bc"}
Dec 05 19:27:47 crc kubenswrapper[4828]: I1205 19:27:47.851686 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5qtt7" podStartSLOduration=2.851655524 podStartE2EDuration="2.851655524s" podCreationTimestamp="2025-12-05 19:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:27:47.833886343 +0000 UTC m=+1445.729108689" watchObservedRunningTime="2025-12-05 19:27:47.851655524 +0000 UTC m=+1445.746877870"
Dec 05 19:27:51 crc kubenswrapper[4828]: I1205 19:27:51.855665 4828 generic.go:334] "Generic (PLEG): container finished" podID="b4931033-0f5f-4d93-b809-45da0865ddfa" containerID="f0e5f1df9a55681c8f67d807d61cf9713a0f9fa5f9a56a7b9d54425efa189eca" exitCode=0
Dec 05 19:27:51 crc kubenswrapper[4828]: I1205 19:27:51.855776 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5qtt7" event={"ID":"b4931033-0f5f-4d93-b809-45da0865ddfa","Type":"ContainerDied","Data":"f0e5f1df9a55681c8f67d807d61cf9713a0f9fa5f9a56a7b9d54425efa189eca"}
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.258231 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.382671 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-combined-ca-bundle\") pod \"b4931033-0f5f-4d93-b809-45da0865ddfa\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") "
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.382813 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-config-data\") pod \"b4931033-0f5f-4d93-b809-45da0865ddfa\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") "
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.383016 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjbq4\" (UniqueName: \"kubernetes.io/projected/b4931033-0f5f-4d93-b809-45da0865ddfa-kube-api-access-jjbq4\") pod \"b4931033-0f5f-4d93-b809-45da0865ddfa\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") "
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.383039 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-scripts\") pod \"b4931033-0f5f-4d93-b809-45da0865ddfa\" (UID: \"b4931033-0f5f-4d93-b809-45da0865ddfa\") "
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.388257 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-scripts" (OuterVolumeSpecName: "scripts") pod "b4931033-0f5f-4d93-b809-45da0865ddfa" (UID: "b4931033-0f5f-4d93-b809-45da0865ddfa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.388537 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4931033-0f5f-4d93-b809-45da0865ddfa-kube-api-access-jjbq4" (OuterVolumeSpecName: "kube-api-access-jjbq4") pod "b4931033-0f5f-4d93-b809-45da0865ddfa" (UID: "b4931033-0f5f-4d93-b809-45da0865ddfa"). InnerVolumeSpecName "kube-api-access-jjbq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.410960 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-config-data" (OuterVolumeSpecName: "config-data") pod "b4931033-0f5f-4d93-b809-45da0865ddfa" (UID: "b4931033-0f5f-4d93-b809-45da0865ddfa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.411562 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4931033-0f5f-4d93-b809-45da0865ddfa" (UID: "b4931033-0f5f-4d93-b809-45da0865ddfa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.486349 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjbq4\" (UniqueName: \"kubernetes.io/projected/b4931033-0f5f-4d93-b809-45da0865ddfa-kube-api-access-jjbq4\") on node \"crc\" DevicePath \"\""
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.486409 4828 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.486421 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.486430 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4931033-0f5f-4d93-b809-45da0865ddfa-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.879060 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5qtt7" event={"ID":"b4931033-0f5f-4d93-b809-45da0865ddfa","Type":"ContainerDied","Data":"b5ffd6cdb5068adf5ee2b053d068b9117da7d8acaebea66e866ca99300e772bc"}
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.879103 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5ffd6cdb5068adf5ee2b053d068b9117da7d8acaebea66e866ca99300e772bc"
Dec 05 19:27:53 crc kubenswrapper[4828]: I1205 19:27:53.879108 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5qtt7"
Dec 05 19:27:54 crc kubenswrapper[4828]: I1205 19:27:54.060260 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 05 19:27:54 crc kubenswrapper[4828]: I1205 19:27:54.060555 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f127a8bd-9835-442a-a12d-7eeae0cf6296" containerName="nova-api-log" containerID="cri-o://38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac" gracePeriod=30
Dec 05 19:27:54 crc kubenswrapper[4828]: I1205 19:27:54.060655 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f127a8bd-9835-442a-a12d-7eeae0cf6296" containerName="nova-api-api" containerID="cri-o://054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454" gracePeriod=30
Dec 05 19:27:54 crc kubenswrapper[4828]: I1205 19:27:54.087371 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 05 19:27:54 crc kubenswrapper[4828]: I1205 19:27:54.087712 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f6951b78-5bbf-48d5-9edc-efbde0f5e939" containerName="nova-scheduler-scheduler" containerID="cri-o://ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06" gracePeriod=30
Dec 05 19:27:54 crc kubenswrapper[4828]: I1205 19:27:54.119217 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 19:27:54 crc kubenswrapper[4828]: I1205 19:27:54.119478 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="12374de6-1d67-43ff-8067-319d86b0fe6b" containerName="nova-metadata-log" containerID="cri-o://f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20" gracePeriod=30
Dec 05 19:27:54 crc kubenswrapper[4828]: I1205 19:27:54.119559 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="12374de6-1d67-43ff-8067-319d86b0fe6b" containerName="nova-metadata-metadata" containerID="cri-o://9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd" gracePeriod=30
Dec 05 19:27:54 crc kubenswrapper[4828]: I1205 19:27:54.892817 4828 generic.go:334] "Generic (PLEG): container finished" podID="f127a8bd-9835-442a-a12d-7eeae0cf6296" containerID="38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac" exitCode=143
Dec 05 19:27:54 crc kubenswrapper[4828]: I1205 19:27:54.892908 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f127a8bd-9835-442a-a12d-7eeae0cf6296","Type":"ContainerDied","Data":"38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac"}
Dec 05 19:27:54 crc kubenswrapper[4828]: I1205 19:27:54.896740 4828 generic.go:334] "Generic (PLEG): container finished" podID="12374de6-1d67-43ff-8067-319d86b0fe6b" containerID="f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20" exitCode=143
Dec 05 19:27:54 crc kubenswrapper[4828]: I1205 19:27:54.896779 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12374de6-1d67-43ff-8067-319d86b0fe6b","Type":"ContainerDied","Data":"f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20"}
Dec 05 19:27:56 crc kubenswrapper[4828]: E1205 19:27:56.504912 4828 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6951b78_5bbf_48d5_9edc_efbde0f5e939.slice/crio-conmon-ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6951b78_5bbf_48d5_9edc_efbde0f5e939.slice/crio-ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06.scope\": RecentStats: unable to find data in memory cache]"
Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.833205 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.921691 4828 generic.go:334] "Generic (PLEG): container finished" podID="f6951b78-5bbf-48d5-9edc-efbde0f5e939" containerID="ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06" exitCode=0
Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.921741 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6951b78-5bbf-48d5-9edc-efbde0f5e939","Type":"ContainerDied","Data":"ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06"}
Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.921779 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.921805 4828 scope.go:117] "RemoveContainer" containerID="ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06"
Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.921791 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6951b78-5bbf-48d5-9edc-efbde0f5e939","Type":"ContainerDied","Data":"3b1ba15bb19f489461c6e81f08c9d93ff6ed397abece84ab8882925742b2f739"}
Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.944529 4828 scope.go:117] "RemoveContainer" containerID="ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06"
Dec 05 19:27:56 crc kubenswrapper[4828]: E1205 19:27:56.945044 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06\": container with ID starting with ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06 not found: ID does not exist" containerID="ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06"
Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.945071 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06"} err="failed to get container status \"ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06\": rpc error: code = NotFound desc = could not find container \"ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06\": container with ID starting with ca9df55c86c8dc29ae059e20ff584a65412b7ad2dcfd34b54e8efa3be4d07f06 not found: ID does not exist"
Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.955727 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6951b78-5bbf-48d5-9edc-efbde0f5e939-combined-ca-bundle\") pod \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\" (UID: \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\") "
Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.956027 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6951b78-5bbf-48d5-9edc-efbde0f5e939-config-data\") pod \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\" (UID: \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\") "
Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.956081 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlm8p\" (UniqueName: \"kubernetes.io/projected/f6951b78-5bbf-48d5-9edc-efbde0f5e939-kube-api-access-dlm8p\") pod \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\" (UID: \"f6951b78-5bbf-48d5-9edc-efbde0f5e939\") "
Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.961377 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6951b78-5bbf-48d5-9edc-efbde0f5e939-kube-api-access-dlm8p" (OuterVolumeSpecName: "kube-api-access-dlm8p") pod "f6951b78-5bbf-48d5-9edc-efbde0f5e939" (UID: "f6951b78-5bbf-48d5-9edc-efbde0f5e939"). InnerVolumeSpecName "kube-api-access-dlm8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.997363 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6951b78-5bbf-48d5-9edc-efbde0f5e939-config-data" (OuterVolumeSpecName: "config-data") pod "f6951b78-5bbf-48d5-9edc-efbde0f5e939" (UID: "f6951b78-5bbf-48d5-9edc-efbde0f5e939"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:56 crc kubenswrapper[4828]: I1205 19:27:56.997706 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6951b78-5bbf-48d5-9edc-efbde0f5e939-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6951b78-5bbf-48d5-9edc-efbde0f5e939" (UID: "f6951b78-5bbf-48d5-9edc-efbde0f5e939"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.058261 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6951b78-5bbf-48d5-9edc-efbde0f5e939-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.058313 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlm8p\" (UniqueName: \"kubernetes.io/projected/f6951b78-5bbf-48d5-9edc-efbde0f5e939-kube-api-access-dlm8p\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.058332 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6951b78-5bbf-48d5-9edc-efbde0f5e939-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.373221 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.386430 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.399355 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 19:27:57 crc kubenswrapper[4828]: E1205 19:27:57.399788 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6951b78-5bbf-48d5-9edc-efbde0f5e939" containerName="nova-scheduler-scheduler" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.399806 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6951b78-5bbf-48d5-9edc-efbde0f5e939" containerName="nova-scheduler-scheduler" Dec 05 19:27:57 crc kubenswrapper[4828]: E1205 19:27:57.399836 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4931033-0f5f-4d93-b809-45da0865ddfa" containerName="nova-manage" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.399843 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4931033-0f5f-4d93-b809-45da0865ddfa" containerName="nova-manage" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.400078 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4931033-0f5f-4d93-b809-45da0865ddfa" containerName="nova-manage" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.400117 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6951b78-5bbf-48d5-9edc-efbde0f5e939" containerName="nova-scheduler-scheduler" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.400796 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.403447 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.409863 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.566518 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c44e1b-fe99-4645-894d-8f7c89ec0ed2-config-data\") pod \"nova-scheduler-0\" (UID: \"e6c44e1b-fe99-4645-894d-8f7c89ec0ed2\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.566987 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shcgn\" (UniqueName: \"kubernetes.io/projected/e6c44e1b-fe99-4645-894d-8f7c89ec0ed2-kube-api-access-shcgn\") pod \"nova-scheduler-0\" (UID: \"e6c44e1b-fe99-4645-894d-8f7c89ec0ed2\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.567191 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c44e1b-fe99-4645-894d-8f7c89ec0ed2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e6c44e1b-fe99-4645-894d-8f7c89ec0ed2\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.655299 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.669676 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c44e1b-fe99-4645-894d-8f7c89ec0ed2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e6c44e1b-fe99-4645-894d-8f7c89ec0ed2\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.670248 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c44e1b-fe99-4645-894d-8f7c89ec0ed2-config-data\") pod \"nova-scheduler-0\" (UID: \"e6c44e1b-fe99-4645-894d-8f7c89ec0ed2\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.670364 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shcgn\" (UniqueName: \"kubernetes.io/projected/e6c44e1b-fe99-4645-894d-8f7c89ec0ed2-kube-api-access-shcgn\") pod \"nova-scheduler-0\" (UID: \"e6c44e1b-fe99-4645-894d-8f7c89ec0ed2\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.691187 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c44e1b-fe99-4645-894d-8f7c89ec0ed2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e6c44e1b-fe99-4645-894d-8f7c89ec0ed2\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.692094 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c44e1b-fe99-4645-894d-8f7c89ec0ed2-config-data\") pod \"nova-scheduler-0\" (UID: \"e6c44e1b-fe99-4645-894d-8f7c89ec0ed2\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:57 crc 
kubenswrapper[4828]: I1205 19:27:57.695930 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shcgn\" (UniqueName: \"kubernetes.io/projected/e6c44e1b-fe99-4645-894d-8f7c89ec0ed2-kube-api-access-shcgn\") pod \"nova-scheduler-0\" (UID: \"e6c44e1b-fe99-4645-894d-8f7c89ec0ed2\") " pod="openstack/nova-scheduler-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.729449 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.777033 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-internal-tls-certs\") pod \"f127a8bd-9835-442a-a12d-7eeae0cf6296\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.777154 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-combined-ca-bundle\") pod \"f127a8bd-9835-442a-a12d-7eeae0cf6296\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.777236 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8nzl\" (UniqueName: \"kubernetes.io/projected/f127a8bd-9835-442a-a12d-7eeae0cf6296-kube-api-access-w8nzl\") pod \"f127a8bd-9835-442a-a12d-7eeae0cf6296\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.777362 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-public-tls-certs\") pod \"f127a8bd-9835-442a-a12d-7eeae0cf6296\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.777389 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f127a8bd-9835-442a-a12d-7eeae0cf6296-logs\") pod \"f127a8bd-9835-442a-a12d-7eeae0cf6296\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.777413 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-config-data\") pod \"f127a8bd-9835-442a-a12d-7eeae0cf6296\" (UID: \"f127a8bd-9835-442a-a12d-7eeae0cf6296\") " Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.782276 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f127a8bd-9835-442a-a12d-7eeae0cf6296-logs" (OuterVolumeSpecName: "logs") pod "f127a8bd-9835-442a-a12d-7eeae0cf6296" (UID: "f127a8bd-9835-442a-a12d-7eeae0cf6296"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.784206 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f127a8bd-9835-442a-a12d-7eeae0cf6296-kube-api-access-w8nzl" (OuterVolumeSpecName: "kube-api-access-w8nzl") pod "f127a8bd-9835-442a-a12d-7eeae0cf6296" (UID: "f127a8bd-9835-442a-a12d-7eeae0cf6296"). InnerVolumeSpecName "kube-api-access-w8nzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.800702 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.808150 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f127a8bd-9835-442a-a12d-7eeae0cf6296" (UID: "f127a8bd-9835-442a-a12d-7eeae0cf6296"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.809095 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-config-data" (OuterVolumeSpecName: "config-data") pod "f127a8bd-9835-442a-a12d-7eeae0cf6296" (UID: "f127a8bd-9835-442a-a12d-7eeae0cf6296"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.853304 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f127a8bd-9835-442a-a12d-7eeae0cf6296" (UID: "f127a8bd-9835-442a-a12d-7eeae0cf6296"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.880641 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-nova-metadata-tls-certs\") pod \"12374de6-1d67-43ff-8067-319d86b0fe6b\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.880688 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcmdc\" (UniqueName: \"kubernetes.io/projected/12374de6-1d67-43ff-8067-319d86b0fe6b-kube-api-access-pcmdc\") pod \"12374de6-1d67-43ff-8067-319d86b0fe6b\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.880717 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-combined-ca-bundle\") pod \"12374de6-1d67-43ff-8067-319d86b0fe6b\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.880805 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12374de6-1d67-43ff-8067-319d86b0fe6b-logs\") pod \"12374de6-1d67-43ff-8067-319d86b0fe6b\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.880948 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-config-data\") pod \"12374de6-1d67-43ff-8067-319d86b0fe6b\" (UID: \"12374de6-1d67-43ff-8067-319d86b0fe6b\") " Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.881463 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f127a8bd-9835-442a-a12d-7eeae0cf6296-logs\") on node 
\"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.881480 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.881489 4828 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.881501 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.881512 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8nzl\" (UniqueName: \"kubernetes.io/projected/f127a8bd-9835-442a-a12d-7eeae0cf6296-kube-api-access-w8nzl\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.883356 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12374de6-1d67-43ff-8067-319d86b0fe6b-logs" (OuterVolumeSpecName: "logs") pod "12374de6-1d67-43ff-8067-319d86b0fe6b" (UID: "12374de6-1d67-43ff-8067-319d86b0fe6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.883971 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12374de6-1d67-43ff-8067-319d86b0fe6b-kube-api-access-pcmdc" (OuterVolumeSpecName: "kube-api-access-pcmdc") pod "12374de6-1d67-43ff-8067-319d86b0fe6b" (UID: "12374de6-1d67-43ff-8067-319d86b0fe6b"). InnerVolumeSpecName "kube-api-access-pcmdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.893930 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f127a8bd-9835-442a-a12d-7eeae0cf6296" (UID: "f127a8bd-9835-442a-a12d-7eeae0cf6296"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.909790 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12374de6-1d67-43ff-8067-319d86b0fe6b" (UID: "12374de6-1d67-43ff-8067-319d86b0fe6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.910000 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-config-data" (OuterVolumeSpecName: "config-data") pod "12374de6-1d67-43ff-8067-319d86b0fe6b" (UID: "12374de6-1d67-43ff-8067-319d86b0fe6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.934333 4828 generic.go:334] "Generic (PLEG): container finished" podID="f127a8bd-9835-442a-a12d-7eeae0cf6296" containerID="054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454" exitCode=0 Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.934414 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f127a8bd-9835-442a-a12d-7eeae0cf6296","Type":"ContainerDied","Data":"054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454"} Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.934445 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f127a8bd-9835-442a-a12d-7eeae0cf6296","Type":"ContainerDied","Data":"dd4c31ce977ce3c0ec13977a08d6714b8080cec631a30ae66d70edc9060a6490"} Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.934465 4828 scope.go:117] "RemoveContainer" containerID="054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.934587 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.943164 4828 generic.go:334] "Generic (PLEG): container finished" podID="12374de6-1d67-43ff-8067-319d86b0fe6b" containerID="9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd" exitCode=0 Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.943210 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12374de6-1d67-43ff-8067-319d86b0fe6b","Type":"ContainerDied","Data":"9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd"} Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.943239 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12374de6-1d67-43ff-8067-319d86b0fe6b","Type":"ContainerDied","Data":"e8d048c21d40895d54a5f1793b5911b74ddd6f985ebb6fa78afaf42e533c2d73"} Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.943295 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.964306 4828 scope.go:117] "RemoveContainer" containerID="38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.964396 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "12374de6-1d67-43ff-8067-319d86b0fe6b" (UID: "12374de6-1d67-43ff-8067-319d86b0fe6b"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.981794 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.983002 4828 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f127a8bd-9835-442a-a12d-7eeae0cf6296-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.983043 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.983055 4828 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.983067 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcmdc\" (UniqueName: \"kubernetes.io/projected/12374de6-1d67-43ff-8067-319d86b0fe6b-kube-api-access-pcmdc\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.983078 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12374de6-1d67-43ff-8067-319d86b0fe6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.983089 4828 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12374de6-1d67-43ff-8067-319d86b0fe6b-logs\") on node \"crc\" DevicePath \"\"" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.990560 4828 scope.go:117] "RemoveContainer" containerID="054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454" Dec 05 19:27:57 crc kubenswrapper[4828]: E1205 19:27:57.991069 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454\": container with ID starting with 054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454 not found: ID does not exist" containerID="054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.991102 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454"} err="failed to get container status \"054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454\": rpc error: code = NotFound desc = could not find container \"054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454\": container with ID starting with 054cf6d7dd83863ebb6653649dd3312c6acf9bb5bcd1e08de24ff47767a47454 not found: ID does not exist" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.991123 4828 scope.go:117] "RemoveContainer" containerID="38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.991302 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:57 crc kubenswrapper[4828]: E1205 19:27:57.995254 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac\": container with ID starting with 38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac not found: ID does not exist" containerID="38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.995311 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac"} err="failed to get container status \"38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac\": rpc error: code = NotFound desc = could not find container \"38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac\": container with ID starting with 38fa6d5e90d2b2a54af9f8970a1ad3237af02f6a578c519d5aa78649cb31b9ac not found: ID does not exist" Dec 05 19:27:57 crc kubenswrapper[4828]: I1205 19:27:57.995343 4828 scope.go:117] "RemoveContainer" containerID="9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.031271 4828 scope.go:117] "RemoveContainer" containerID="f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.047847 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:58 crc kubenswrapper[4828]: E1205 19:27:58.048799 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12374de6-1d67-43ff-8067-319d86b0fe6b" containerName="nova-metadata-metadata" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.048818 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="12374de6-1d67-43ff-8067-319d86b0fe6b" containerName="nova-metadata-metadata" Dec 05 19:27:58 crc kubenswrapper[4828]: E1205 19:27:58.048898 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12374de6-1d67-43ff-8067-319d86b0fe6b" containerName="nova-metadata-log" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.048908 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="12374de6-1d67-43ff-8067-319d86b0fe6b" containerName="nova-metadata-log" Dec 05 19:27:58 crc kubenswrapper[4828]: E1205 19:27:58.048960 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f127a8bd-9835-442a-a12d-7eeae0cf6296" containerName="nova-api-api" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.048973 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f127a8bd-9835-442a-a12d-7eeae0cf6296" containerName="nova-api-api" Dec 05 19:27:58 crc kubenswrapper[4828]: E1205 19:27:58.048992 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f127a8bd-9835-442a-a12d-7eeae0cf6296" containerName="nova-api-log" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.049006 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f127a8bd-9835-442a-a12d-7eeae0cf6296" containerName="nova-api-log" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.049474 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f127a8bd-9835-442a-a12d-7eeae0cf6296" containerName="nova-api-log" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.049506 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f127a8bd-9835-442a-a12d-7eeae0cf6296" containerName="nova-api-api" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.049521 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="12374de6-1d67-43ff-8067-319d86b0fe6b" 
containerName="nova-metadata-metadata" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.049564 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="12374de6-1d67-43ff-8067-319d86b0fe6b" containerName="nova-metadata-log" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.052512 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.054755 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.054982 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.057233 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.060659 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.062360 4828 scope.go:117] "RemoveContainer" containerID="9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd" Dec 05 19:27:58 crc kubenswrapper[4828]: E1205 19:27:58.072087 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd\": container with ID starting with 9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd not found: ID does not exist" containerID="9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.072122 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd"} err="failed to get container status \"9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd\": rpc error: code = NotFound desc = could not find container \"9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd\": container with ID starting with 9aaf81bedf963a03454b7b424af81608614ee20ee7343e4d7ebbdac81fc8b7cd not found: ID does not exist" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.072146 4828 scope.go:117] "RemoveContainer" containerID="f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20" Dec 05 19:27:58 crc kubenswrapper[4828]: E1205 19:27:58.072431 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20\": container with ID starting with f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20 not found: ID does not exist" containerID="f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.072470 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20"} err="failed to get container status \"f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20\": rpc error: code = NotFound desc = could not find container \"f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20\": container with ID starting with f50cdcdbf2e67aa74f91a3edeff38274e6dc02a8c744cc4d93f6acd8a94a0d20 not found: ID does not exist" Dec 05 19:27:58 crc 
kubenswrapper[4828]: I1205 19:27:58.189664 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8beb365-61f2-42bf-be67-af226900e81c-public-tls-certs\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.189728 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcwrc\" (UniqueName: \"kubernetes.io/projected/e8beb365-61f2-42bf-be67-af226900e81c-kube-api-access-fcwrc\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.189758 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8beb365-61f2-42bf-be67-af226900e81c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.189793 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8beb365-61f2-42bf-be67-af226900e81c-logs\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.189878 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8beb365-61f2-42bf-be67-af226900e81c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.190175 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8beb365-61f2-42bf-be67-af226900e81c-config-data\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.227480 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 19:27:58 crc kubenswrapper[4828]: W1205 19:27:58.231590 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6c44e1b_fe99_4645_894d_8f7c89ec0ed2.slice/crio-d92d58f6d53925bef472e31d70ed05cd79330185f438c32bdd745db1cbd30304 WatchSource:0}: Error finding container d92d58f6d53925bef472e31d70ed05cd79330185f438c32bdd745db1cbd30304: Status 404 returned error can't find the container with id d92d58f6d53925bef472e31d70ed05cd79330185f438c32bdd745db1cbd30304 Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.288868 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.292113 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8beb365-61f2-42bf-be67-af226900e81c-public-tls-certs\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.292172 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fcwrc\" (UniqueName: \"kubernetes.io/projected/e8beb365-61f2-42bf-be67-af226900e81c-kube-api-access-fcwrc\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.292193 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8beb365-61f2-42bf-be67-af226900e81c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.292224 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8beb365-61f2-42bf-be67-af226900e81c-logs\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.292247 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8beb365-61f2-42bf-be67-af226900e81c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.292270 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8beb365-61f2-42bf-be67-af226900e81c-config-data\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.293070 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8beb365-61f2-42bf-be67-af226900e81c-logs\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.298380 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8beb365-61f2-42bf-be67-af226900e81c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.301281 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8beb365-61f2-42bf-be67-af226900e81c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.303387 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8beb365-61f2-42bf-be67-af226900e81c-config-data\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.303844 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8beb365-61f2-42bf-be67-af226900e81c-public-tls-certs\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.307877 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.316763 4828 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.318726 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.320031 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcwrc\" (UniqueName: \"kubernetes.io/projected/e8beb365-61f2-42bf-be67-af226900e81c-kube-api-access-fcwrc\") pod \"nova-api-0\" (UID: \"e8beb365-61f2-42bf-be67-af226900e81c\") " pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.327938 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.328364 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.342803 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.394928 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-logs\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.395072 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.395103 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.395144 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fhmk\" (UniqueName: \"kubernetes.io/projected/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-kube-api-access-7fhmk\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.395246 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-config-data\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.415504 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.458719 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12374de6-1d67-43ff-8067-319d86b0fe6b" path="/var/lib/kubelet/pods/12374de6-1d67-43ff-8067-319d86b0fe6b/volumes" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.459519 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f127a8bd-9835-442a-a12d-7eeae0cf6296" path="/var/lib/kubelet/pods/f127a8bd-9835-442a-a12d-7eeae0cf6296/volumes" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.460784 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6951b78-5bbf-48d5-9edc-efbde0f5e939" path="/var/lib/kubelet/pods/f6951b78-5bbf-48d5-9edc-efbde0f5e939/volumes" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.497473 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-logs\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.497782 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.497838 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.497873 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fhmk\" (UniqueName: \"kubernetes.io/projected/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-kube-api-access-7fhmk\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.497911 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-config-data\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.498155 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-logs\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.503467 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.503670 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-config-data\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.505761 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.524025 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fhmk\" (UniqueName: \"kubernetes.io/projected/d0bf5ea5-86ef-400d-a033-4bb5c31f61df-kube-api-access-7fhmk\") pod \"nova-metadata-0\" (UID: \"d0bf5ea5-86ef-400d-a033-4bb5c31f61df\") " pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.641665 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.886556 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.955415 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8beb365-61f2-42bf-be67-af226900e81c","Type":"ContainerStarted","Data":"12d6e9117368b7473b648cb144464e2bb845dc9199b2f880aef3ace9f480aabf"} Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.958371 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e6c44e1b-fe99-4645-894d-8f7c89ec0ed2","Type":"ContainerStarted","Data":"77333474ff396f84713ec29c7f85b73ec2e5283c194fa7eba2ee7e133a4b9572"} Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.958420 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e6c44e1b-fe99-4645-894d-8f7c89ec0ed2","Type":"ContainerStarted","Data":"d92d58f6d53925bef472e31d70ed05cd79330185f438c32bdd745db1cbd30304"} Dec 05 19:27:58 crc kubenswrapper[4828]: I1205 19:27:58.976504 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.976482619 podStartE2EDuration="1.976482619s" podCreationTimestamp="2025-12-05 19:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:27:58.974147965 +0000 UTC m=+1456.869370271" watchObservedRunningTime="2025-12-05 19:27:58.976482619 +0000 UTC m=+1456.871704925" Dec 05 19:27:59 crc kubenswrapper[4828]: I1205 19:27:59.140783 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 19:27:59 crc kubenswrapper[4828]: W1205 19:27:59.146359 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0bf5ea5_86ef_400d_a033_4bb5c31f61df.slice/crio-a2d21efdedf2cb0c645379487ec965a77056792f6aa94241ff150294faf64a25 WatchSource:0}: Error finding container a2d21efdedf2cb0c645379487ec965a77056792f6aa94241ff150294faf64a25: Status 404 returned error can't find the container with id a2d21efdedf2cb0c645379487ec965a77056792f6aa94241ff150294faf64a25 Dec 05 19:27:59 crc kubenswrapper[4828]: I1205 19:27:59.971181 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e8beb365-61f2-42bf-be67-af226900e81c","Type":"ContainerStarted","Data":"8b7926569af2de120ba620c7d7c3944c8b2773ffd8e1ec5c7fc7e644a92f97d7"} Dec 05 19:27:59 crc kubenswrapper[4828]: I1205 19:27:59.971489 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8beb365-61f2-42bf-be67-af226900e81c","Type":"ContainerStarted","Data":"0a63d0a53ba04d27ba9c9de42374f95b5431f71348e1dda588295d343eedaf67"} Dec 05 19:27:59 crc kubenswrapper[4828]: I1205 19:27:59.974130 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0bf5ea5-86ef-400d-a033-4bb5c31f61df","Type":"ContainerStarted","Data":"514b569a7d4f9a9515b02f9b48846cd74c76528bb50d2b2d1883976820cdba66"} Dec 05 19:27:59 crc kubenswrapper[4828]: I1205 19:27:59.974162 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0bf5ea5-86ef-400d-a033-4bb5c31f61df","Type":"ContainerStarted","Data":"e1318e5be1d31cf21bb90bfbb85e29372d072cc0dc70eebf3691c8b2d6de0e1a"} Dec 05 19:27:59 crc kubenswrapper[4828]: I1205 19:27:59.974178 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0bf5ea5-86ef-400d-a033-4bb5c31f61df","Type":"ContainerStarted","Data":"a2d21efdedf2cb0c645379487ec965a77056792f6aa94241ff150294faf64a25"} Dec 05 19:28:00 crc kubenswrapper[4828]: I1205 19:28:00.001374 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.001351283 podStartE2EDuration="3.001351283s" podCreationTimestamp="2025-12-05 19:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:27:59.988881136 +0000 UTC m=+1457.884103462" watchObservedRunningTime="2025-12-05 19:28:00.001351283 +0000 UTC m=+1457.896573589" Dec 05 19:28:00 crc kubenswrapper[4828]: I1205 19:28:00.009558 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.009536443 podStartE2EDuration="2.009536443s" podCreationTimestamp="2025-12-05 19:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:28:00.006691666 +0000 UTC m=+1457.901913972" watchObservedRunningTime="2025-12-05 19:28:00.009536443 +0000 UTC m=+1457.904758759" Dec 05 19:28:02 crc kubenswrapper[4828]: I1205 19:28:02.731030 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 19:28:03 crc kubenswrapper[4828]: I1205 19:28:03.642190 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 19:28:03 crc kubenswrapper[4828]: I1205 19:28:03.642495 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 19:28:04 crc kubenswrapper[4828]: I1205 19:28:04.187374 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 19:28:07 crc kubenswrapper[4828]: I1205 19:28:07.730767 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 19:28:07 crc kubenswrapper[4828]: I1205 19:28:07.766691 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 19:28:08 crc kubenswrapper[4828]: I1205 19:28:08.074534 4828 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 19:28:08 crc kubenswrapper[4828]: I1205 19:28:08.415639 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 19:28:08 crc kubenswrapper[4828]: I1205 19:28:08.415693 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 19:28:08 crc kubenswrapper[4828]: I1205 19:28:08.641884 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 19:28:08 crc kubenswrapper[4828]: I1205 19:28:08.642387 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 19:28:09 crc kubenswrapper[4828]: I1205 19:28:09.435027 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8beb365-61f2-42bf-be67-af226900e81c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 19:28:09 crc kubenswrapper[4828]: I1205 19:28:09.435059 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8beb365-61f2-42bf-be67-af226900e81c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 19:28:09 crc kubenswrapper[4828]: I1205 19:28:09.655009 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d0bf5ea5-86ef-400d-a033-4bb5c31f61df" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 19:28:09 crc kubenswrapper[4828]: I1205 19:28:09.655002 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d0bf5ea5-86ef-400d-a033-4bb5c31f61df" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 19:28:18 crc kubenswrapper[4828]: I1205 19:28:18.424091 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 19:28:18 crc kubenswrapper[4828]: I1205 19:28:18.424833 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 19:28:18 crc kubenswrapper[4828]: I1205 19:28:18.426347 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 19:28:18 crc kubenswrapper[4828]: I1205 19:28:18.430703 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 19:28:18 crc kubenswrapper[4828]: I1205 19:28:18.647620 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 19:28:18 crc kubenswrapper[4828]: I1205 19:28:18.647676 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 19:28:18 crc kubenswrapper[4828]: I1205 19:28:18.652047 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 19:28:18 crc kubenswrapper[4828]: I1205 19:28:18.653173 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Dec 05 19:28:19 crc kubenswrapper[4828]: I1205 19:28:19.163454 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 19:28:19 crc kubenswrapper[4828]: I1205 19:28:19.169450 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 19:28:27 crc kubenswrapper[4828]: I1205 19:28:27.299165 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 19:28:28 crc kubenswrapper[4828]: I1205 19:28:28.318625 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 19:28:31 crc kubenswrapper[4828]: I1205 19:28:31.836641 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e21a851c-5179-4365-8e72-5dea16be90cc" containerName="rabbitmq" containerID="cri-o://dc1dc77b3357801ba19f4888d4499f413ec8a380f40145c8bb2cec1a9cbb5018" gracePeriod=604796 Dec 05 19:28:32 crc kubenswrapper[4828]: I1205 19:28:32.309233 4828 generic.go:334] "Generic (PLEG): container finished" podID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" containerID="d000229fa1db508cef366e145d044d5816652c2a9c5bba1cd918b2052aa0438a" exitCode=1 Dec 05 19:28:32 crc kubenswrapper[4828]: I1205 19:28:32.309281 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerDied","Data":"d000229fa1db508cef366e145d044d5816652c2a9c5bba1cd918b2052aa0438a"} Dec 05 19:28:32 crc kubenswrapper[4828]: I1205 19:28:32.309316 4828 scope.go:117] "RemoveContainer" containerID="430af8e018b4db94e5fbc1658ab5c48af8bdcbbed4d9e9f4a8b1c4d49b774c99" Dec 05 19:28:32 crc kubenswrapper[4828]: I1205 19:28:32.310357 4828 scope.go:117] "RemoveContainer" containerID="d000229fa1db508cef366e145d044d5816652c2a9c5bba1cd918b2052aa0438a" Dec 05 19:28:32 crc kubenswrapper[4828]: E1205 19:28:32.311107 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:28:32 crc kubenswrapper[4828]: I1205 19:28:32.629599 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="50db8d67-b1c6-4165-a526-8149092660ed" containerName="rabbitmq" containerID="cri-o://faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae" gracePeriod=604796 Dec 05 19:28:33 crc kubenswrapper[4828]: I1205 19:28:33.050491 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="e21a851c-5179-4365-8e72-5dea16be90cc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Dec 05 19:28:33 crc kubenswrapper[4828]: I1205 19:28:33.347625 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="50db8d67-b1c6-4165-a526-8149092660ed" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Dec 05 19:28:35 crc kubenswrapper[4828]: I1205 19:28:35.118290 4828 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:28:35 crc kubenswrapper[4828]: I1205 19:28:35.119695 4828 scope.go:117] "RemoveContainer" containerID="d000229fa1db508cef366e145d044d5816652c2a9c5bba1cd918b2052aa0438a" Dec 05 19:28:35 crc kubenswrapper[4828]: E1205 19:28:35.120312 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:28:35 crc kubenswrapper[4828]: I1205 19:28:35.259991 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:28:35 crc kubenswrapper[4828]: I1205 19:28:35.260057 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.365026 4828 generic.go:334] "Generic (PLEG): container finished" podID="e21a851c-5179-4365-8e72-5dea16be90cc" containerID="dc1dc77b3357801ba19f4888d4499f413ec8a380f40145c8bb2cec1a9cbb5018" exitCode=0 Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.365115 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e21a851c-5179-4365-8e72-5dea16be90cc","Type":"ContainerDied","Data":"dc1dc77b3357801ba19f4888d4499f413ec8a380f40145c8bb2cec1a9cbb5018"} Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.367303 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e21a851c-5179-4365-8e72-5dea16be90cc","Type":"ContainerDied","Data":"9d11b726f1b9f69adb93085f723671e565bce6d95b184041aae26d57b88c2fe7"} Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.367433 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d11b726f1b9f69adb93085f723671e565bce6d95b184041aae26d57b88c2fe7" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.409815 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.463764 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e21a851c-5179-4365-8e72-5dea16be90cc-pod-info\") pod \"e21a851c-5179-4365-8e72-5dea16be90cc\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.463851 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-erlang-cookie\") pod \"e21a851c-5179-4365-8e72-5dea16be90cc\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.463918 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e21a851c-5179-4365-8e72-5dea16be90cc-erlang-cookie-secret\") pod \"e21a851c-5179-4365-8e72-5dea16be90cc\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.463990 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-config-data\") pod \"e21a851c-5179-4365-8e72-5dea16be90cc\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.464013 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-plugins-conf\") pod \"e21a851c-5179-4365-8e72-5dea16be90cc\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.464059 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"e21a851c-5179-4365-8e72-5dea16be90cc\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.464150 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-confd\") pod \"e21a851c-5179-4365-8e72-5dea16be90cc\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.464215 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-plugins\") pod \"e21a851c-5179-4365-8e72-5dea16be90cc\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.464261 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqjv6\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-kube-api-access-pqjv6\") pod \"e21a851c-5179-4365-8e72-5dea16be90cc\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.464364 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-server-conf\") pod \"e21a851c-5179-4365-8e72-5dea16be90cc\" (UID: 
\"e21a851c-5179-4365-8e72-5dea16be90cc\") " Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.464406 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-tls\") pod \"e21a851c-5179-4365-8e72-5dea16be90cc\" (UID: \"e21a851c-5179-4365-8e72-5dea16be90cc\") " Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.465500 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e21a851c-5179-4365-8e72-5dea16be90cc" (UID: "e21a851c-5179-4365-8e72-5dea16be90cc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.470783 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e21a851c-5179-4365-8e72-5dea16be90cc" (UID: "e21a851c-5179-4365-8e72-5dea16be90cc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.471150 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e21a851c-5179-4365-8e72-5dea16be90cc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e21a851c-5179-4365-8e72-5dea16be90cc" (UID: "e21a851c-5179-4365-8e72-5dea16be90cc"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.471519 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e21a851c-5179-4365-8e72-5dea16be90cc" (UID: "e21a851c-5179-4365-8e72-5dea16be90cc"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.472686 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.472709 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.472725 4828 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e21a851c-5179-4365-8e72-5dea16be90cc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.472740 4828 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.474367 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-kube-api-access-pqjv6" (OuterVolumeSpecName: "kube-api-access-pqjv6") pod "e21a851c-5179-4365-8e72-5dea16be90cc" (UID: "e21a851c-5179-4365-8e72-5dea16be90cc"). InnerVolumeSpecName "kube-api-access-pqjv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.558586 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "e21a851c-5179-4365-8e72-5dea16be90cc" (UID: "e21a851c-5179-4365-8e72-5dea16be90cc"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.498588 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e21a851c-5179-4365-8e72-5dea16be90cc-pod-info" (OuterVolumeSpecName: "pod-info") pod "e21a851c-5179-4365-8e72-5dea16be90cc" (UID: "e21a851c-5179-4365-8e72-5dea16be90cc"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.570938 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e21a851c-5179-4365-8e72-5dea16be90cc" (UID: "e21a851c-5179-4365-8e72-5dea16be90cc"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.578698 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.578772 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqjv6\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-kube-api-access-pqjv6\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.578786 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.578795 4828 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e21a851c-5179-4365-8e72-5dea16be90cc-pod-info\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.609638 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-config-data" (OuterVolumeSpecName: "config-data") pod "e21a851c-5179-4365-8e72-5dea16be90cc" (UID: "e21a851c-5179-4365-8e72-5dea16be90cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.622879 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.630073 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-server-conf" (OuterVolumeSpecName: "server-conf") pod "e21a851c-5179-4365-8e72-5dea16be90cc" (UID: "e21a851c-5179-4365-8e72-5dea16be90cc"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.681097 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.681140 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.681150 4828 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e21a851c-5179-4365-8e72-5dea16be90cc-server-conf\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.682466 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e21a851c-5179-4365-8e72-5dea16be90cc" (UID: "e21a851c-5179-4365-8e72-5dea16be90cc"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:28:38 crc kubenswrapper[4828]: I1205 19:28:38.782594 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e21a851c-5179-4365-8e72-5dea16be90cc-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.235435 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.293186 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-confd\") pod \"50db8d67-b1c6-4165-a526-8149092660ed\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.293278 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50db8d67-b1c6-4165-a526-8149092660ed-pod-info\") pod \"50db8d67-b1c6-4165-a526-8149092660ed\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.293331 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-plugins-conf\") pod \"50db8d67-b1c6-4165-a526-8149092660ed\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.293410 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzhv4\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-kube-api-access-rzhv4\") pod \"50db8d67-b1c6-4165-a526-8149092660ed\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.293466 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-tls\") pod \"50db8d67-b1c6-4165-a526-8149092660ed\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.293536 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-config-data\") pod \"50db8d67-b1c6-4165-a526-8149092660ed\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.293580 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-plugins\") pod \"50db8d67-b1c6-4165-a526-8149092660ed\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.293634 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-server-conf\") pod \"50db8d67-b1c6-4165-a526-8149092660ed\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.293744 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/50db8d67-b1c6-4165-a526-8149092660ed-erlang-cookie-secret\") pod \"50db8d67-b1c6-4165-a526-8149092660ed\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.293778 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"50db8d67-b1c6-4165-a526-8149092660ed\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.293870 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-erlang-cookie\") pod \"50db8d67-b1c6-4165-a526-8149092660ed\" (UID: \"50db8d67-b1c6-4165-a526-8149092660ed\") " Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.295191 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "50db8d67-b1c6-4165-a526-8149092660ed" (UID: "50db8d67-b1c6-4165-a526-8149092660ed"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.296005 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "50db8d67-b1c6-4165-a526-8149092660ed" (UID: "50db8d67-b1c6-4165-a526-8149092660ed"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.296179 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "50db8d67-b1c6-4165-a526-8149092660ed" (UID: "50db8d67-b1c6-4165-a526-8149092660ed"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.301789 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/50db8d67-b1c6-4165-a526-8149092660ed-pod-info" (OuterVolumeSpecName: "pod-info") pod "50db8d67-b1c6-4165-a526-8149092660ed" (UID: "50db8d67-b1c6-4165-a526-8149092660ed"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.302047 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "50db8d67-b1c6-4165-a526-8149092660ed" (UID: "50db8d67-b1c6-4165-a526-8149092660ed"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.305357 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50db8d67-b1c6-4165-a526-8149092660ed-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "50db8d67-b1c6-4165-a526-8149092660ed" (UID: "50db8d67-b1c6-4165-a526-8149092660ed"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.305406 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-kube-api-access-rzhv4" (OuterVolumeSpecName: "kube-api-access-rzhv4") pod "50db8d67-b1c6-4165-a526-8149092660ed" (UID: "50db8d67-b1c6-4165-a526-8149092660ed"). InnerVolumeSpecName "kube-api-access-rzhv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.324972 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "50db8d67-b1c6-4165-a526-8149092660ed" (UID: "50db8d67-b1c6-4165-a526-8149092660ed"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.350421 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-config-data" (OuterVolumeSpecName: "config-data") pod "50db8d67-b1c6-4165-a526-8149092660ed" (UID: "50db8d67-b1c6-4165-a526-8149092660ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.382409 4828 generic.go:334] "Generic (PLEG): container finished" podID="50db8d67-b1c6-4165-a526-8149092660ed" containerID="faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae" exitCode=0 Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.382498 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.384058 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.384355 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50db8d67-b1c6-4165-a526-8149092660ed","Type":"ContainerDied","Data":"faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae"} Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.384407 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50db8d67-b1c6-4165-a526-8149092660ed","Type":"ContainerDied","Data":"6c7ad0e64c226782f30300bb540d0049375bbf6a23ae19ecbdc1ef787775e856"} Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.384430 4828 scope.go:117] "RemoveContainer" containerID="faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.396728 4828 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50db8d67-b1c6-4165-a526-8149092660ed-pod-info\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.396753 4828 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.396766 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzhv4\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-kube-api-access-rzhv4\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.396775 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.396783 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.396790 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.396800 4828 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50db8d67-b1c6-4165-a526-8149092660ed-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.396839 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.396850 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.407181 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-server-conf" (OuterVolumeSpecName: "server-conf") pod 
"50db8d67-b1c6-4165-a526-8149092660ed" (UID: "50db8d67-b1c6-4165-a526-8149092660ed"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.421228 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.433592 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.449928 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "50db8d67-b1c6-4165-a526-8149092660ed" (UID: "50db8d67-b1c6-4165-a526-8149092660ed"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.450718 4828 scope.go:117] "RemoveContainer" containerID="e065a0fe74f9c385bcc5b2d72a845ca0945938e9796cd9e236af91876f1347fa" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.456446 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.474698 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 19:28:39 crc kubenswrapper[4828]: E1205 19:28:39.475139 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e21a851c-5179-4365-8e72-5dea16be90cc" containerName="rabbitmq" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.475156 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21a851c-5179-4365-8e72-5dea16be90cc" containerName="rabbitmq" Dec 05 19:28:39 crc kubenswrapper[4828]: E1205 19:28:39.475173 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50db8d67-b1c6-4165-a526-8149092660ed" containerName="rabbitmq" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.475181 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="50db8d67-b1c6-4165-a526-8149092660ed" containerName="rabbitmq" Dec 05 19:28:39 crc kubenswrapper[4828]: E1205 19:28:39.475213 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50db8d67-b1c6-4165-a526-8149092660ed" containerName="setup-container" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.475223 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="50db8d67-b1c6-4165-a526-8149092660ed" containerName="setup-container" Dec 05 19:28:39 crc kubenswrapper[4828]: E1205 19:28:39.475232 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e21a851c-5179-4365-8e72-5dea16be90cc" containerName="setup-container" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.475239 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21a851c-5179-4365-8e72-5dea16be90cc" containerName="setup-container" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.475464 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="e21a851c-5179-4365-8e72-5dea16be90cc" containerName="rabbitmq" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.475494 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="50db8d67-b1c6-4165-a526-8149092660ed" containerName="rabbitmq" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.476487 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.486315 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.488488 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.488694 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.488716 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.488891 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.488904 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.489356 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.489520 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-sbwjw" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.498768 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.498847 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.498887 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqj5c\" (UniqueName: \"kubernetes.io/projected/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-kube-api-access-dqj5c\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.498915 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-config-data\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.498931 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.498986 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.499003 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.499033 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.499049 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.499164 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.499273 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.499340 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.499569 4828 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50db8d67-b1c6-4165-a526-8149092660ed-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.499592 4828 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50db8d67-b1c6-4165-a526-8149092660ed-server-conf\") on node \"crc\" DevicePath \"\"" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.517862 4828 scope.go:117] "RemoveContainer" containerID="faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae" Dec 05 19:28:39 crc kubenswrapper[4828]: E1205 19:28:39.518277 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae\": container with ID starting with faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae not found: ID does not exist" containerID="faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 
19:28:39.518330 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae"} err="failed to get container status \"faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae\": rpc error: code = NotFound desc = could not find container \"faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae\": container with ID starting with faa047bd8fabc68e7f3380ac9d548802a2ae382000c17bd0135f658ee46ff4ae not found: ID does not exist" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.518356 4828 scope.go:117] "RemoveContainer" containerID="e065a0fe74f9c385bcc5b2d72a845ca0945938e9796cd9e236af91876f1347fa" Dec 05 19:28:39 crc kubenswrapper[4828]: E1205 19:28:39.519120 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e065a0fe74f9c385bcc5b2d72a845ca0945938e9796cd9e236af91876f1347fa\": container with ID starting with e065a0fe74f9c385bcc5b2d72a845ca0945938e9796cd9e236af91876f1347fa not found: ID does not exist" containerID="e065a0fe74f9c385bcc5b2d72a845ca0945938e9796cd9e236af91876f1347fa" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.519175 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e065a0fe74f9c385bcc5b2d72a845ca0945938e9796cd9e236af91876f1347fa"} err="failed to get container status \"e065a0fe74f9c385bcc5b2d72a845ca0945938e9796cd9e236af91876f1347fa\": rpc error: code = NotFound desc = could not find container \"e065a0fe74f9c385bcc5b2d72a845ca0945938e9796cd9e236af91876f1347fa\": container with ID starting with e065a0fe74f9c385bcc5b2d72a845ca0945938e9796cd9e236af91876f1347fa not found: ID does not exist" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.601471 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.601696 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.601808 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.601916 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.602034 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.601937 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.602448 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.602327 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.602532 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.602559 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.602601 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqj5c\" (UniqueName: \"kubernetes.io/projected/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-kube-api-access-dqj5c\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.602644 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.602672 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-config-data\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.603222 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.603278 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-config-data\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.603705 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.603734 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.605904 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.605935 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.609380 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.610264 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.621429 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqj5c\" (UniqueName: \"kubernetes.io/projected/63ac6b69-a1ea-4b8d-9532-679d79cd1a87-kube-api-access-dqj5c\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.638142 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"63ac6b69-a1ea-4b8d-9532-679d79cd1a87\") " pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.747381 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.763858 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.776959 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 
19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.779087 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.788816 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.789130 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.791627 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-l4l5h" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.791792 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.792039 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.792206 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.792381 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.802810 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.812539 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.913492 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.913903 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.913997 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.914099 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.914138 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.914162 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.914259 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzqg7\" (UniqueName: \"kubernetes.io/projected/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-kube-api-access-dzqg7\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.914283 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.914326 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.914349 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:39 crc kubenswrapper[4828]: I1205 19:28:39.914388 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.017213 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.017267 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.017290 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.017342 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzqg7\" (UniqueName: \"kubernetes.io/projected/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-kube-api-access-dzqg7\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.017366 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.017398 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.017418 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.017448 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.017499 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.017542 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.017594 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.018982 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") device mount 
path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.019740 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.022358 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.023238 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.025219 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.025182 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.025445 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.026367 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.026975 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.027116 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.039380 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dzqg7\" (UniqueName: \"kubernetes.io/projected/97ef01a4-c35c-41a0-abf1-1fbb83ff67e6-kube-api-access-dzqg7\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.077983 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.108324 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.248499 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.391414 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63ac6b69-a1ea-4b8d-9532-679d79cd1a87","Type":"ContainerStarted","Data":"fe98d7384c714efe517a0cfb70eb8fdb2bc2c48806e6f324e6b73ceaa9cc7c7d"} Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.456007 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50db8d67-b1c6-4165-a526-8149092660ed" path="/var/lib/kubelet/pods/50db8d67-b1c6-4165-a526-8149092660ed/volumes" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.457280 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e21a851c-5179-4365-8e72-5dea16be90cc" path="/var/lib/kubelet/pods/e21a851c-5179-4365-8e72-5dea16be90cc/volumes" Dec 05 19:28:40 crc kubenswrapper[4828]: I1205 19:28:40.554341 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 19:28:40 crc kubenswrapper[4828]: W1205 19:28:40.556115 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97ef01a4_c35c_41a0_abf1_1fbb83ff67e6.slice/crio-1519b148b97e731264c98febac7bfe74c1f52b4b2bdcc149d84382d1a28c05b4 WatchSource:0}: Error finding container 1519b148b97e731264c98febac7bfe74c1f52b4b2bdcc149d84382d1a28c05b4: Status 404 returned error can't find the container with id 1519b148b97e731264c98febac7bfe74c1f52b4b2bdcc149d84382d1a28c05b4 Dec 05 19:28:41 crc kubenswrapper[4828]: I1205 19:28:41.407789 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6","Type":"ContainerStarted","Data":"1519b148b97e731264c98febac7bfe74c1f52b4b2bdcc149d84382d1a28c05b4"} Dec 05 19:28:42 crc kubenswrapper[4828]: I1205 19:28:42.434054 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6","Type":"ContainerStarted","Data":"18ad3e79de51a51bb3aee39e13e6cd32bf348ac02e03dbef9ac5439f77a138cd"} Dec 05 19:28:42 crc kubenswrapper[4828]: I1205 19:28:42.438763 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63ac6b69-a1ea-4b8d-9532-679d79cd1a87","Type":"ContainerStarted","Data":"099fabdf0fc06b01126728e9420079493998965a2dce55f56850f5739da1533e"} Dec 05 19:28:45 crc kubenswrapper[4828]: I1205 19:28:45.117945 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:28:45 crc kubenswrapper[4828]: I1205 19:28:45.119703 4828 scope.go:117] "RemoveContainer" containerID="d000229fa1db508cef366e145d044d5816652c2a9c5bba1cd918b2052aa0438a" Dec 05 19:28:45 crc kubenswrapper[4828]: E1205 19:28:45.120259 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:29:00 crc kubenswrapper[4828]: I1205 19:29:00.447335 4828 scope.go:117] "RemoveContainer" containerID="d000229fa1db508cef366e145d044d5816652c2a9c5bba1cd918b2052aa0438a" Dec 05 19:29:01 crc kubenswrapper[4828]: I1205 19:29:01.626841 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"7d8435f242c38118d7f0cc40add4f792e71bcab239d7df82aa8fd6e2f7e074fd"} Dec 05 19:29:01 crc kubenswrapper[4828]: I1205 19:29:01.627918 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:29:05 crc kubenswrapper[4828]: I1205 19:29:05.126899 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:29:05 crc kubenswrapper[4828]: I1205 19:29:05.260196 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:29:05 crc kubenswrapper[4828]: I1205 19:29:05.260258 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:29:14 crc kubenswrapper[4828]: I1205 19:29:14.766619 4828 generic.go:334] "Generic (PLEG): container finished" podID="63ac6b69-a1ea-4b8d-9532-679d79cd1a87" containerID="099fabdf0fc06b01126728e9420079493998965a2dce55f56850f5739da1533e" exitCode=0 Dec 05 19:29:14 crc kubenswrapper[4828]: I1205 19:29:14.766737 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63ac6b69-a1ea-4b8d-9532-679d79cd1a87","Type":"ContainerDied","Data":"099fabdf0fc06b01126728e9420079493998965a2dce55f56850f5739da1533e"} Dec 05 19:29:14 crc kubenswrapper[4828]: I1205 19:29:14.771019 4828 generic.go:334] "Generic (PLEG): container finished" podID="97ef01a4-c35c-41a0-abf1-1fbb83ff67e6" containerID="18ad3e79de51a51bb3aee39e13e6cd32bf348ac02e03dbef9ac5439f77a138cd" exitCode=0 Dec 05 19:29:14 crc kubenswrapper[4828]: I1205 19:29:14.771057 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6","Type":"ContainerDied","Data":"18ad3e79de51a51bb3aee39e13e6cd32bf348ac02e03dbef9ac5439f77a138cd"} 
Dec 05 19:29:15 crc kubenswrapper[4828]: I1205 19:29:15.782768 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63ac6b69-a1ea-4b8d-9532-679d79cd1a87","Type":"ContainerStarted","Data":"4226ee76290ba3ca2908c3bb0ac13a06a5b37f407b39b56a4957fbf5d4573f9c"} Dec 05 19:29:15 crc kubenswrapper[4828]: I1205 19:29:15.783421 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 05 19:29:15 crc kubenswrapper[4828]: I1205 19:29:15.785257 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"97ef01a4-c35c-41a0-abf1-1fbb83ff67e6","Type":"ContainerStarted","Data":"c66a9cb8f719e0a42eb20f23f94e7a1c7a95a63bcefadfa337b3e194171f0913"} Dec 05 19:29:15 crc kubenswrapper[4828]: I1205 19:29:15.785521 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:29:15 crc kubenswrapper[4828]: I1205 19:29:15.846112 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.846090049 podStartE2EDuration="36.846090049s" podCreationTimestamp="2025-12-05 19:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:29:15.814182057 +0000 UTC m=+1533.709404373" watchObservedRunningTime="2025-12-05 19:29:15.846090049 +0000 UTC m=+1533.741312375" Dec 05 19:29:15 crc kubenswrapper[4828]: I1205 19:29:15.849350 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.849333336 podStartE2EDuration="36.849333336s" podCreationTimestamp="2025-12-05 19:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:29:15.837220709 +0000 UTC m=+1533.732443025" watchObservedRunningTime="2025-12-05 19:29:15.849333336 +0000 UTC m=+1533.744555652" Dec 05 19:29:27 crc kubenswrapper[4828]: I1205 19:29:27.395542 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wv84v"]
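
The two "Observed pod startup duration" records above are simple wall-clock deltas: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, so for rabbitmq-server-0 that is 19:29:15.846090049 minus 19:28:39, i.e. 36.846090049s. The zero-valued firstStartedPulling/lastFinishedPulling timestamps indicate no image pull was observed (the images were already on the node), which is why the SLO duration and podStartE2EDuration coincide here. Reproducing the arithmetic, with fractional seconds truncated to the six digits Python's strptime accepts:

    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S.%f"

    # Timestamps copied from the rabbitmq-server-0 record above.
    created  = datetime.strptime("2025-12-05 19:28:39.000000", FMT)
    observed = datetime.strptime("2025-12-05 19:29:15.846090", FMT)

    print((observed - created).total_seconds())  # 36.84609, matching podStartSLOduration
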
Need to start a new one" pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:27 crc kubenswrapper[4828]: I1205 19:29:27.413204 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wv84v"] Dec 05 19:29:27 crc kubenswrapper[4828]: I1205 19:29:27.445327 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpbnw\" (UniqueName: \"kubernetes.io/projected/5640171a-b010-4e99-8ec3-544201aebbcf-kube-api-access-mpbnw\") pod \"certified-operators-wv84v\" (UID: \"5640171a-b010-4e99-8ec3-544201aebbcf\") " pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:27 crc kubenswrapper[4828]: I1205 19:29:27.445396 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5640171a-b010-4e99-8ec3-544201aebbcf-utilities\") pod \"certified-operators-wv84v\" (UID: \"5640171a-b010-4e99-8ec3-544201aebbcf\") " pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:27 crc kubenswrapper[4828]: I1205 19:29:27.445498 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5640171a-b010-4e99-8ec3-544201aebbcf-catalog-content\") pod \"certified-operators-wv84v\" (UID: \"5640171a-b010-4e99-8ec3-544201aebbcf\") " pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:27 crc kubenswrapper[4828]: I1205 19:29:27.547235 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpbnw\" (UniqueName: \"kubernetes.io/projected/5640171a-b010-4e99-8ec3-544201aebbcf-kube-api-access-mpbnw\") pod \"certified-operators-wv84v\" (UID: \"5640171a-b010-4e99-8ec3-544201aebbcf\") " pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:27 crc kubenswrapper[4828]: I1205 19:29:27.547608 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5640171a-b010-4e99-8ec3-544201aebbcf-utilities\") pod \"certified-operators-wv84v\" (UID: \"5640171a-b010-4e99-8ec3-544201aebbcf\") " pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:27 crc kubenswrapper[4828]: I1205 19:29:27.547696 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5640171a-b010-4e99-8ec3-544201aebbcf-catalog-content\") pod \"certified-operators-wv84v\" (UID: \"5640171a-b010-4e99-8ec3-544201aebbcf\") " pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:27 crc kubenswrapper[4828]: I1205 19:29:27.548277 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5640171a-b010-4e99-8ec3-544201aebbcf-catalog-content\") pod \"certified-operators-wv84v\" (UID: \"5640171a-b010-4e99-8ec3-544201aebbcf\") " pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:27 crc kubenswrapper[4828]: I1205 19:29:27.548816 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5640171a-b010-4e99-8ec3-544201aebbcf-utilities\") pod \"certified-operators-wv84v\" (UID: \"5640171a-b010-4e99-8ec3-544201aebbcf\") " pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:27 crc kubenswrapper[4828]: I1205 19:29:27.571893 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mpbnw\" (UniqueName: \"kubernetes.io/projected/5640171a-b010-4e99-8ec3-544201aebbcf-kube-api-access-mpbnw\") pod \"certified-operators-wv84v\" (UID: \"5640171a-b010-4e99-8ec3-544201aebbcf\") " pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:27 crc kubenswrapper[4828]: I1205 19:29:27.731635 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:28 crc kubenswrapper[4828]: I1205 19:29:28.492450 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wv84v"] Dec 05 19:29:28 crc kubenswrapper[4828]: I1205 19:29:28.972130 4828 generic.go:334] "Generic (PLEG): container finished" podID="5640171a-b010-4e99-8ec3-544201aebbcf" containerID="0df1b3d7debf656c526d2e87c34ab46c5d667ec262e559f4470845c0fefdd03b" exitCode=0 Dec 05 19:29:28 crc kubenswrapper[4828]: I1205 19:29:28.972243 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv84v" event={"ID":"5640171a-b010-4e99-8ec3-544201aebbcf","Type":"ContainerDied","Data":"0df1b3d7debf656c526d2e87c34ab46c5d667ec262e559f4470845c0fefdd03b"} Dec 05 19:29:28 crc kubenswrapper[4828]: I1205 19:29:28.972505 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv84v" event={"ID":"5640171a-b010-4e99-8ec3-544201aebbcf","Type":"ContainerStarted","Data":"024c0d953e7f1a415e5148d5fb3a13ce2425596624de6765673880c2d2b59e41"} Dec 05 19:29:29 crc kubenswrapper[4828]: I1205 19:29:29.815990 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 05 19:29:29 crc kubenswrapper[4828]: I1205 19:29:29.982741 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv84v" event={"ID":"5640171a-b010-4e99-8ec3-544201aebbcf","Type":"ContainerStarted","Data":"737068f5ead7c9ec359f0677ab8305b3eea6f29d056da07eec6ab829ca29e5cd"} Dec 05 19:29:30 crc kubenswrapper[4828]: I1205 19:29:30.114302 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 05 19:29:30 crc kubenswrapper[4828]: I1205 19:29:30.587672 4828 scope.go:117] "RemoveContainer" containerID="c399f137e8476671322f5c3df219c0542ea3413c956c623f62ca825d6f30ae5d" Dec 05 19:29:30 crc kubenswrapper[4828]: I1205 19:29:30.612167 4828 scope.go:117] "RemoveContainer" containerID="dc1dc77b3357801ba19f4888d4499f413ec8a380f40145c8bb2cec1a9cbb5018" Dec 05 19:29:30 crc kubenswrapper[4828]: I1205 19:29:30.996609 4828 generic.go:334] "Generic (PLEG): container finished" podID="5640171a-b010-4e99-8ec3-544201aebbcf" containerID="737068f5ead7c9ec359f0677ab8305b3eea6f29d056da07eec6ab829ca29e5cd" exitCode=0 Dec 05 19:29:30 crc kubenswrapper[4828]: I1205 19:29:30.996681 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv84v" event={"ID":"5640171a-b010-4e99-8ec3-544201aebbcf","Type":"ContainerDied","Data":"737068f5ead7c9ec359f0677ab8305b3eea6f29d056da07eec6ab829ca29e5cd"} Dec 05 19:29:31 crc kubenswrapper[4828]: I1205 19:29:31.130155 4828 scope.go:117] "RemoveContainer" containerID="8027cae6e99adaa1b3c37200de9135597bae28c60cc1d757ebc7b66cf2e51506" Dec 05 19:29:32 crc kubenswrapper[4828]: I1205 19:29:32.007351 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv84v" 
event={"ID":"5640171a-b010-4e99-8ec3-544201aebbcf","Type":"ContainerStarted","Data":"552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947"} Dec 05 19:29:32 crc kubenswrapper[4828]: I1205 19:29:32.034543 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wv84v" podStartSLOduration=2.289707022 podStartE2EDuration="5.034525108s" podCreationTimestamp="2025-12-05 19:29:27 +0000 UTC" firstStartedPulling="2025-12-05 19:29:28.97455905 +0000 UTC m=+1546.869781346" lastFinishedPulling="2025-12-05 19:29:31.719377126 +0000 UTC m=+1549.614599432" observedRunningTime="2025-12-05 19:29:32.03164361 +0000 UTC m=+1549.926865916" watchObservedRunningTime="2025-12-05 19:29:32.034525108 +0000 UTC m=+1549.929747414" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.632926 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-z65kd"] Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.634996 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.637494 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.657954 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-z65kd"] Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.679660 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.679705 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.679728 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.679777 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjz8v\" (UniqueName: \"kubernetes.io/projected/177ab816-ea50-4af9-add2-9e671249b133-kube-api-access-zjz8v\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.680027 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc 
kubenswrapper[4828]: I1205 19:29:33.680081 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-config\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.680143 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.782022 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.782086 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.782122 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjz8v\" (UniqueName: \"kubernetes.io/projected/177ab816-ea50-4af9-add2-9e671249b133-kube-api-access-zjz8v\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.782298 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.782361 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-config\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.782431 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.782463 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 
19:29:33.782888 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.783086 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.783512 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.783883 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.783929 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-config\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.784008 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.805629 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjz8v\" (UniqueName: \"kubernetes.io/projected/177ab816-ea50-4af9-add2-9e671249b133-kube-api-access-zjz8v\") pod \"dnsmasq-dns-79bd4cc8c9-z65kd\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:33 crc kubenswrapper[4828]: I1205 19:29:33.953999 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:34 crc kubenswrapper[4828]: I1205 19:29:34.412614 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-z65kd"] Dec 05 19:29:35 crc kubenswrapper[4828]: I1205 19:29:35.040198 4828 generic.go:334] "Generic (PLEG): container finished" podID="177ab816-ea50-4af9-add2-9e671249b133" containerID="d1aa029b40479f3cd230a03df83eac74ec3b126845a794080f6d5e10a7c4df8e" exitCode=0 Dec 05 19:29:35 crc kubenswrapper[4828]: I1205 19:29:35.040293 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" event={"ID":"177ab816-ea50-4af9-add2-9e671249b133","Type":"ContainerDied","Data":"d1aa029b40479f3cd230a03df83eac74ec3b126845a794080f6d5e10a7c4df8e"} Dec 05 19:29:35 crc kubenswrapper[4828]: I1205 19:29:35.040555 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" event={"ID":"177ab816-ea50-4af9-add2-9e671249b133","Type":"ContainerStarted","Data":"617eb0a8b5095e4fd4ac5f564059396d2d000e7e19cfe37eb1c3d5b00e575fd2"} Dec 05 19:29:35 crc kubenswrapper[4828]: I1205 19:29:35.259980 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:29:35 crc kubenswrapper[4828]: I1205 19:29:35.260040 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:29:35 crc kubenswrapper[4828]: I1205 19:29:35.260080 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:29:35 crc kubenswrapper[4828]: I1205 19:29:35.260800 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 19:29:35 crc kubenswrapper[4828]: I1205 19:29:35.260892 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" gracePeriod=600 Dec 05 19:29:35 crc kubenswrapper[4828]: E1205 19:29:35.928687 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:29:36 crc kubenswrapper[4828]: I1205 19:29:36.056416 4828 generic.go:334] "Generic (PLEG): container finished" 
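The machine-config-daemon entries above are the kubelet's liveness-failure path end to end: patch_prober records the refused GET on 127.0.0.1:8798, prober.go marks the Liveness probe failed, kuberuntime_manager notes the container "will be restarted", and kuberuntime_container kills it with the pod's grace period (gracePeriod=600); the ContainerDied event follows below. Because this container has been failing repeatedly, the restart is gated by CrashLoopBackOff, and the "back-off 5m0s" in the pod_workers.go error is the cap of the kubelet's exponential restart delay. A sketch of that schedule; the 10s initial value and doubling are assumed kubelet defaults, not something this log states, while the 300s cap is the 5m0s figure visible above:

```python
import itertools

def crashloop_delays(initial=10.0, cap=300.0):
    """Yield successive CrashLoopBackOff restart delays in seconds.

    initial=10s and doubling are assumed kubelet defaults; the 300s
    (5m0s) cap is the figure in the pod_workers.go error above.
    """
    delay = initial
    while True:
        yield min(delay, cap)
        delay = min(delay * 2, cap)

print(list(itertools.islice(crashloop_delays(), 7)))
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
```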
podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" exitCode=0 Dec 05 19:29:36 crc kubenswrapper[4828]: I1205 19:29:36.056470 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"} Dec 05 19:29:36 crc kubenswrapper[4828]: I1205 19:29:36.056555 4828 scope.go:117] "RemoveContainer" containerID="aab20e62cb85e96facfecb4602cb199c408644c9ab8b87bd02db08dd9a3628e0" Dec 05 19:29:36 crc kubenswrapper[4828]: I1205 19:29:36.057343 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:29:36 crc kubenswrapper[4828]: E1205 19:29:36.057729 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:29:36 crc kubenswrapper[4828]: I1205 19:29:36.061234 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" event={"ID":"177ab816-ea50-4af9-add2-9e671249b133","Type":"ContainerStarted","Data":"1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7"} Dec 05 19:29:36 crc kubenswrapper[4828]: I1205 19:29:36.061529 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:36 crc kubenswrapper[4828]: I1205 19:29:36.159225 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" podStartSLOduration=3.159198832 podStartE2EDuration="3.159198832s" podCreationTimestamp="2025-12-05 19:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:29:36.11063711 +0000 UTC m=+1554.005859436" watchObservedRunningTime="2025-12-05 19:29:36.159198832 +0000 UTC m=+1554.054421138" Dec 05 19:29:37 crc kubenswrapper[4828]: I1205 19:29:37.731763 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:37 crc kubenswrapper[4828]: I1205 19:29:37.732102 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:37 crc kubenswrapper[4828]: I1205 19:29:37.794012 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:38 crc kubenswrapper[4828]: I1205 19:29:38.127541 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:38 crc kubenswrapper[4828]: I1205 19:29:38.172847 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wv84v"] Dec 05 19:29:40 crc kubenswrapper[4828]: I1205 19:29:40.098538 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wv84v" podUID="5640171a-b010-4e99-8ec3-544201aebbcf" 
containerName="registry-server" containerID="cri-o://552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947" gracePeriod=2 Dec 05 19:29:40 crc kubenswrapper[4828]: I1205 19:29:40.587076 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:40 crc kubenswrapper[4828]: I1205 19:29:40.623847 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5640171a-b010-4e99-8ec3-544201aebbcf-catalog-content\") pod \"5640171a-b010-4e99-8ec3-544201aebbcf\" (UID: \"5640171a-b010-4e99-8ec3-544201aebbcf\") " Dec 05 19:29:40 crc kubenswrapper[4828]: I1205 19:29:40.624229 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpbnw\" (UniqueName: \"kubernetes.io/projected/5640171a-b010-4e99-8ec3-544201aebbcf-kube-api-access-mpbnw\") pod \"5640171a-b010-4e99-8ec3-544201aebbcf\" (UID: \"5640171a-b010-4e99-8ec3-544201aebbcf\") " Dec 05 19:29:40 crc kubenswrapper[4828]: I1205 19:29:40.624467 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5640171a-b010-4e99-8ec3-544201aebbcf-utilities\") pod \"5640171a-b010-4e99-8ec3-544201aebbcf\" (UID: \"5640171a-b010-4e99-8ec3-544201aebbcf\") " Dec 05 19:29:40 crc kubenswrapper[4828]: I1205 19:29:40.627573 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5640171a-b010-4e99-8ec3-544201aebbcf-utilities" (OuterVolumeSpecName: "utilities") pod "5640171a-b010-4e99-8ec3-544201aebbcf" (UID: "5640171a-b010-4e99-8ec3-544201aebbcf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:29:40 crc kubenswrapper[4828]: I1205 19:29:40.633155 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5640171a-b010-4e99-8ec3-544201aebbcf-kube-api-access-mpbnw" (OuterVolumeSpecName: "kube-api-access-mpbnw") pod "5640171a-b010-4e99-8ec3-544201aebbcf" (UID: "5640171a-b010-4e99-8ec3-544201aebbcf"). InnerVolumeSpecName "kube-api-access-mpbnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:29:40 crc kubenswrapper[4828]: I1205 19:29:40.689027 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5640171a-b010-4e99-8ec3-544201aebbcf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5640171a-b010-4e99-8ec3-544201aebbcf" (UID: "5640171a-b010-4e99-8ec3-544201aebbcf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:29:40 crc kubenswrapper[4828]: I1205 19:29:40.726434 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5640171a-b010-4e99-8ec3-544201aebbcf-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:29:40 crc kubenswrapper[4828]: I1205 19:29:40.726473 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5640171a-b010-4e99-8ec3-544201aebbcf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:29:40 crc kubenswrapper[4828]: I1205 19:29:40.726487 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpbnw\" (UniqueName: \"kubernetes.io/projected/5640171a-b010-4e99-8ec3-544201aebbcf-kube-api-access-mpbnw\") on node \"crc\" DevicePath \"\"" Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.110809 4828 generic.go:334] "Generic (PLEG): container finished" podID="5640171a-b010-4e99-8ec3-544201aebbcf" containerID="552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947" exitCode=0 Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.110872 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv84v" event={"ID":"5640171a-b010-4e99-8ec3-544201aebbcf","Type":"ContainerDied","Data":"552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947"} Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.110908 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wv84v" Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.110933 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv84v" event={"ID":"5640171a-b010-4e99-8ec3-544201aebbcf","Type":"ContainerDied","Data":"024c0d953e7f1a415e5148d5fb3a13ce2425596624de6765673880c2d2b59e41"} Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.110952 4828 scope.go:117] "RemoveContainer" containerID="552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947" Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.137411 4828 scope.go:117] "RemoveContainer" containerID="737068f5ead7c9ec359f0677ab8305b3eea6f29d056da07eec6ab829ca29e5cd" Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.154378 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wv84v"] Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.162896 4828 scope.go:117] "RemoveContainer" containerID="0df1b3d7debf656c526d2e87c34ab46c5d667ec262e559f4470845c0fefdd03b" Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.163396 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wv84v"] Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.225844 4828 scope.go:117] "RemoveContainer" containerID="552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947" Dec 05 19:29:41 crc kubenswrapper[4828]: E1205 19:29:41.226374 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947\": container with ID starting with 552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947 not found: ID does not exist" containerID="552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947" Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.226416 
4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947"} err="failed to get container status \"552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947\": rpc error: code = NotFound desc = could not find container \"552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947\": container with ID starting with 552b90f4d17d90bc319ece345bbf9c07e3cc4f6c7defc8fdcab4bbb566034947 not found: ID does not exist" Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.226442 4828 scope.go:117] "RemoveContainer" containerID="737068f5ead7c9ec359f0677ab8305b3eea6f29d056da07eec6ab829ca29e5cd" Dec 05 19:29:41 crc kubenswrapper[4828]: E1205 19:29:41.226810 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"737068f5ead7c9ec359f0677ab8305b3eea6f29d056da07eec6ab829ca29e5cd\": container with ID starting with 737068f5ead7c9ec359f0677ab8305b3eea6f29d056da07eec6ab829ca29e5cd not found: ID does not exist" containerID="737068f5ead7c9ec359f0677ab8305b3eea6f29d056da07eec6ab829ca29e5cd" Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.226848 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"737068f5ead7c9ec359f0677ab8305b3eea6f29d056da07eec6ab829ca29e5cd"} err="failed to get container status \"737068f5ead7c9ec359f0677ab8305b3eea6f29d056da07eec6ab829ca29e5cd\": rpc error: code = NotFound desc = could not find container \"737068f5ead7c9ec359f0677ab8305b3eea6f29d056da07eec6ab829ca29e5cd\": container with ID starting with 737068f5ead7c9ec359f0677ab8305b3eea6f29d056da07eec6ab829ca29e5cd not found: ID does not exist" Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.226862 4828 scope.go:117] "RemoveContainer" containerID="0df1b3d7debf656c526d2e87c34ab46c5d667ec262e559f4470845c0fefdd03b" Dec 05 19:29:41 crc kubenswrapper[4828]: E1205 19:29:41.227134 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0df1b3d7debf656c526d2e87c34ab46c5d667ec262e559f4470845c0fefdd03b\": container with ID starting with 0df1b3d7debf656c526d2e87c34ab46c5d667ec262e559f4470845c0fefdd03b not found: ID does not exist" containerID="0df1b3d7debf656c526d2e87c34ab46c5d667ec262e559f4470845c0fefdd03b" Dec 05 19:29:41 crc kubenswrapper[4828]: I1205 19:29:41.227166 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0df1b3d7debf656c526d2e87c34ab46c5d667ec262e559f4470845c0fefdd03b"} err="failed to get container status \"0df1b3d7debf656c526d2e87c34ab46c5d667ec262e559f4470845c0fefdd03b\": rpc error: code = NotFound desc = could not find container \"0df1b3d7debf656c526d2e87c34ab46c5d667ec262e559f4470845c0fefdd03b\": container with ID starting with 0df1b3d7debf656c526d2e87c34ab46c5d667ec262e559f4470845c0fefdd03b not found: ID does not exist" Dec 05 19:29:42 crc kubenswrapper[4828]: I1205 19:29:42.462620 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5640171a-b010-4e99-8ec3-544201aebbcf" path="/var/lib/kubelet/pods/5640171a-b010-4e99-8ec3-544201aebbcf/volumes" Dec 05 19:29:43 crc kubenswrapper[4828]: I1205 19:29:43.956068 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.057774 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" 
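The three RemoveContainer calls above each end in a NotFound error that the kubelet only logs: the containers were already gone from CRI-O, and deletion is treated as idempotent cleanup. The PLEG event stream is also the easiest place to reconstruct a pod's container lifecycle, since every ContainerStarted/ContainerDied entry carries a JSON payload whose Data field is a container (or sandbox) ID. A small illustrative parser, keyed to the event={...} payloads in this log (the payload is valid JSON; the function and its output shape are my own choices):

```python
import json
import re

# Matches kubelet 'SyncLoop (PLEG): event for pod' entries as logged above;
# the event payload contains no nested braces, so a non-greedy match works.
EVENT = re.compile(r'pod="(?P<pod>[^"]+)" event=(?P<event>\{[^}]*\})')

def pleg_timeline(lines):
    """Yield (pod, event_type, container_or_sandbox_id) per PLEG entry."""
    for line in lines:
        m = EVENT.search(line)
        if m:
            ev = json.loads(m.group("event"))
            yield m.group("pod"), ev["Type"], ev["Data"]
```

Run over this section, it recovers the catalog-pod pattern seen for certified-operators-wv84v: the extract-utilities and extract-content containers start and die in turn, then registry-server runs until the API DELETE arrives.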
pods=["openstack/dnsmasq-dns-89c5cd4d5-mhhlc"] Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.059885 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" podUID="1d80a607-092d-41ff-bc3c-8c8bc08fa239" containerName="dnsmasq-dns" containerID="cri-o://4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d" gracePeriod=10 Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.230999 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-w4hmm"] Dec 05 19:29:44 crc kubenswrapper[4828]: E1205 19:29:44.231388 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5640171a-b010-4e99-8ec3-544201aebbcf" containerName="extract-content" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.231405 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5640171a-b010-4e99-8ec3-544201aebbcf" containerName="extract-content" Dec 05 19:29:44 crc kubenswrapper[4828]: E1205 19:29:44.232749 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5640171a-b010-4e99-8ec3-544201aebbcf" containerName="registry-server" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.232793 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5640171a-b010-4e99-8ec3-544201aebbcf" containerName="registry-server" Dec 05 19:29:44 crc kubenswrapper[4828]: E1205 19:29:44.232808 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5640171a-b010-4e99-8ec3-544201aebbcf" containerName="extract-utilities" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.232815 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5640171a-b010-4e99-8ec3-544201aebbcf" containerName="extract-utilities" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.233062 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5640171a-b010-4e99-8ec3-544201aebbcf" containerName="registry-server" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.238240 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.253706 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-w4hmm"] Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.312729 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.312784 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh9c5\" (UniqueName: \"kubernetes.io/projected/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-kube-api-access-hh9c5\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.312879 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.312920 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.312948 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-config\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.312993 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-dns-svc\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.313154 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.414569 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.414667 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.414696 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh9c5\" (UniqueName: \"kubernetes.io/projected/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-kube-api-access-hh9c5\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.414723 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.414758 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.414788 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-config\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.414841 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-dns-svc\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.415466 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.415859 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.415904 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-dns-svc\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.416090 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.416214 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.416531 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-config\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.437286 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh9c5\" (UniqueName: \"kubernetes.io/projected/f77804ae-0e68-40a3-bbd8-5dac2e64eedf-kube-api-access-hh9c5\") pod \"dnsmasq-dns-55478c4467-w4hmm\" (UID: \"f77804ae-0e68-40a3-bbd8-5dac2e64eedf\") " pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.592597 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.725405 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.825707 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-ovsdbserver-sb\") pod \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.825867 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-ovsdbserver-nb\") pod \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.825890 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-dns-swift-storage-0\") pod \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.825967 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-dns-svc\") pod \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.826055 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-config\") pod \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\" (UID: \"1d80a607-092d-41ff-bc3c-8c8bc08fa239\") " Dec 05 19:29:44 crc 
Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.848011 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d80a607-092d-41ff-bc3c-8c8bc08fa239-kube-api-access-flt8b" (OuterVolumeSpecName: "kube-api-access-flt8b") pod "1d80a607-092d-41ff-bc3c-8c8bc08fa239" (UID: "1d80a607-092d-41ff-bc3c-8c8bc08fa239"). InnerVolumeSpecName "kube-api-access-flt8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.928938 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flt8b\" (UniqueName: \"kubernetes.io/projected/1d80a607-092d-41ff-bc3c-8c8bc08fa239-kube-api-access-flt8b\") on node \"crc\" DevicePath \"\""
Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.934702 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d80a607-092d-41ff-bc3c-8c8bc08fa239" (UID: "1d80a607-092d-41ff-bc3c-8c8bc08fa239"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.949392 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d80a607-092d-41ff-bc3c-8c8bc08fa239" (UID: "1d80a607-092d-41ff-bc3c-8c8bc08fa239"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.966678 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-config" (OuterVolumeSpecName: "config") pod "1d80a607-092d-41ff-bc3c-8c8bc08fa239" (UID: "1d80a607-092d-41ff-bc3c-8c8bc08fa239"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:29:44 crc kubenswrapper[4828]: I1205 19:29:44.981948 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d80a607-092d-41ff-bc3c-8c8bc08fa239" (UID: "1d80a607-092d-41ff-bc3c-8c8bc08fa239"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.001394 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d80a607-092d-41ff-bc3c-8c8bc08fa239" (UID: "1d80a607-092d-41ff-bc3c-8c8bc08fa239"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.031192 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.031220 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.031230 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.031238 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-config\") on node \"crc\" DevicePath \"\""
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.031246 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d80a607-092d-41ff-bc3c-8c8bc08fa239-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.172472 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-w4hmm"]
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.189744 4828 generic.go:334] "Generic (PLEG): container finished" podID="1d80a607-092d-41ff-bc3c-8c8bc08fa239" containerID="4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d" exitCode=0
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.189811 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" event={"ID":"1d80a607-092d-41ff-bc3c-8c8bc08fa239","Type":"ContainerDied","Data":"4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d"}
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.189861 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc" event={"ID":"1d80a607-092d-41ff-bc3c-8c8bc08fa239","Type":"ContainerDied","Data":"b89734fda1eff7b794039404206c0b388a45441ac7de0359a2afde454e34136e"}
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.189881 4828 scope.go:117] "RemoveContainer" containerID="4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d"
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.190026 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mhhlc"
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.191249 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-w4hmm" event={"ID":"f77804ae-0e68-40a3-bbd8-5dac2e64eedf","Type":"ContainerStarted","Data":"6f5751f18e25e83f166cd52ed2f65b47d0f8786bf759416dbdee3dcf47c8e7e6"}
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.356127 4828 scope.go:117] "RemoveContainer" containerID="ff5c42483254c42ebecdefe1784dcabaaf8d24925c7b380d43c4dcf5e083c1b3"
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.382943 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mhhlc"]
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.385081 4828 scope.go:117] "RemoveContainer" containerID="4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d"
Dec 05 19:29:45 crc kubenswrapper[4828]: E1205 19:29:45.386285 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d\": container with ID starting with 4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d not found: ID does not exist" containerID="4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d"
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.386314 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d"} err="failed to get container status \"4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d\": rpc error: code = NotFound desc = could not find container \"4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d\": container with ID starting with 4580f1c0ebf65367e71236b0d234b18d0dc5aad8e806e3b80749859ee8b32b2d not found: ID does not exist"
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.386336 4828 scope.go:117] "RemoveContainer" containerID="ff5c42483254c42ebecdefe1784dcabaaf8d24925c7b380d43c4dcf5e083c1b3"
Dec 05 19:29:45 crc kubenswrapper[4828]: E1205 19:29:45.386839 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff5c42483254c42ebecdefe1784dcabaaf8d24925c7b380d43c4dcf5e083c1b3\": container with ID starting with ff5c42483254c42ebecdefe1784dcabaaf8d24925c7b380d43c4dcf5e083c1b3 not found: ID does not exist" containerID="ff5c42483254c42ebecdefe1784dcabaaf8d24925c7b380d43c4dcf5e083c1b3"
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.386868 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff5c42483254c42ebecdefe1784dcabaaf8d24925c7b380d43c4dcf5e083c1b3"} err="failed to get container status \"ff5c42483254c42ebecdefe1784dcabaaf8d24925c7b380d43c4dcf5e083c1b3\": rpc error: code = NotFound desc = could not find container \"ff5c42483254c42ebecdefe1784dcabaaf8d24925c7b380d43c4dcf5e083c1b3\": container with ID starting with ff5c42483254c42ebecdefe1784dcabaaf8d24925c7b380d43c4dcf5e083c1b3 not found: ID does not exist"
Dec 05 19:29:45 crc kubenswrapper[4828]: I1205 19:29:45.396317 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mhhlc"]
Dec 05 19:29:46 crc kubenswrapper[4828]: I1205 19:29:46.204032 4828 generic.go:334] "Generic (PLEG): container finished" podID="f77804ae-0e68-40a3-bbd8-5dac2e64eedf" containerID="a306ae6d254d75741dd299d3c565beef45779ddcbc8af91aec5ea4d83e6677f2" exitCode=0
containerID="a306ae6d254d75741dd299d3c565beef45779ddcbc8af91aec5ea4d83e6677f2" exitCode=0 Dec 05 19:29:46 crc kubenswrapper[4828]: I1205 19:29:46.204096 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-w4hmm" event={"ID":"f77804ae-0e68-40a3-bbd8-5dac2e64eedf","Type":"ContainerDied","Data":"a306ae6d254d75741dd299d3c565beef45779ddcbc8af91aec5ea4d83e6677f2"} Dec 05 19:29:46 crc kubenswrapper[4828]: I1205 19:29:46.466648 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d80a607-092d-41ff-bc3c-8c8bc08fa239" path="/var/lib/kubelet/pods/1d80a607-092d-41ff-bc3c-8c8bc08fa239/volumes" Dec 05 19:29:47 crc kubenswrapper[4828]: I1205 19:29:47.216205 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-w4hmm" event={"ID":"f77804ae-0e68-40a3-bbd8-5dac2e64eedf","Type":"ContainerStarted","Data":"7e653085502aa4558afa0dc71c755e551dc1c43c13de3c7548a47c57604ce3ae"} Dec 05 19:29:47 crc kubenswrapper[4828]: I1205 19:29:47.224093 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:47 crc kubenswrapper[4828]: I1205 19:29:47.256783 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-w4hmm" podStartSLOduration=3.256760479 podStartE2EDuration="3.256760479s" podCreationTimestamp="2025-12-05 19:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:29:47.244679182 +0000 UTC m=+1565.139901508" watchObservedRunningTime="2025-12-05 19:29:47.256760479 +0000 UTC m=+1565.151982785" Dec 05 19:29:47 crc kubenswrapper[4828]: I1205 19:29:47.446742 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:29:47 crc kubenswrapper[4828]: E1205 19:29:47.447090 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.249877 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mlnrn"] Dec 05 19:29:53 crc kubenswrapper[4828]: E1205 19:29:53.250779 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d80a607-092d-41ff-bc3c-8c8bc08fa239" containerName="init" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.250792 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d80a607-092d-41ff-bc3c-8c8bc08fa239" containerName="init" Dec 05 19:29:53 crc kubenswrapper[4828]: E1205 19:29:53.250832 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d80a607-092d-41ff-bc3c-8c8bc08fa239" containerName="dnsmasq-dns" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.250851 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d80a607-092d-41ff-bc3c-8c8bc08fa239" containerName="dnsmasq-dns" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.251025 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d80a607-092d-41ff-bc3c-8c8bc08fa239" containerName="dnsmasq-dns" Dec 05 19:29:53 crc 
kubenswrapper[4828]: I1205 19:29:53.252564 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.268587 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mlnrn"] Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.315070 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvfxw\" (UniqueName: \"kubernetes.io/projected/4d52aa20-1852-43a9-93da-80a67a764fcd-kube-api-access-dvfxw\") pod \"community-operators-mlnrn\" (UID: \"4d52aa20-1852-43a9-93da-80a67a764fcd\") " pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.315311 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52aa20-1852-43a9-93da-80a67a764fcd-catalog-content\") pod \"community-operators-mlnrn\" (UID: \"4d52aa20-1852-43a9-93da-80a67a764fcd\") " pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.315380 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52aa20-1852-43a9-93da-80a67a764fcd-utilities\") pod \"community-operators-mlnrn\" (UID: \"4d52aa20-1852-43a9-93da-80a67a764fcd\") " pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.417163 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvfxw\" (UniqueName: \"kubernetes.io/projected/4d52aa20-1852-43a9-93da-80a67a764fcd-kube-api-access-dvfxw\") pod \"community-operators-mlnrn\" (UID: \"4d52aa20-1852-43a9-93da-80a67a764fcd\") " pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.417290 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52aa20-1852-43a9-93da-80a67a764fcd-catalog-content\") pod \"community-operators-mlnrn\" (UID: \"4d52aa20-1852-43a9-93da-80a67a764fcd\") " pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.417328 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52aa20-1852-43a9-93da-80a67a764fcd-utilities\") pod \"community-operators-mlnrn\" (UID: \"4d52aa20-1852-43a9-93da-80a67a764fcd\") " pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.418054 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52aa20-1852-43a9-93da-80a67a764fcd-utilities\") pod \"community-operators-mlnrn\" (UID: \"4d52aa20-1852-43a9-93da-80a67a764fcd\") " pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.418057 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52aa20-1852-43a9-93da-80a67a764fcd-catalog-content\") pod \"community-operators-mlnrn\" (UID: \"4d52aa20-1852-43a9-93da-80a67a764fcd\") " pod="openshift-marketplace/community-operators-mlnrn" Dec 
05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.451802 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvfxw\" (UniqueName: \"kubernetes.io/projected/4d52aa20-1852-43a9-93da-80a67a764fcd-kube-api-access-dvfxw\") pod \"community-operators-mlnrn\" (UID: \"4d52aa20-1852-43a9-93da-80a67a764fcd\") " pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:29:53 crc kubenswrapper[4828]: I1205 19:29:53.578473 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:29:54 crc kubenswrapper[4828]: I1205 19:29:54.141183 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mlnrn"] Dec 05 19:29:54 crc kubenswrapper[4828]: I1205 19:29:54.289873 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlnrn" event={"ID":"4d52aa20-1852-43a9-93da-80a67a764fcd","Type":"ContainerStarted","Data":"b84b8542ff3d7a7cf31489bdd261749ac5de0c13a83fcb058b93627c710d10de"} Dec 05 19:29:54 crc kubenswrapper[4828]: I1205 19:29:54.597422 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-w4hmm" Dec 05 19:29:54 crc kubenswrapper[4828]: I1205 19:29:54.650175 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-z65kd"] Dec 05 19:29:54 crc kubenswrapper[4828]: I1205 19:29:54.650496 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" podUID="177ab816-ea50-4af9-add2-9e671249b133" containerName="dnsmasq-dns" containerID="cri-o://1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7" gracePeriod=10 Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.182772 4828 util.go:48] "No ready sandbox for pod can be found. 
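
The two kubelet messages repeated throughout this capture, "No sandbox for pod can be found. Need to start a new one" (util.go:30) and "No ready sandbox for pod can be found. Need to start a new one" (util.go:48), both come from the sync path that decides whether a pod's sandbox (the CRI-managed environment holding the pod's namespaces) must be created before any containers can start. For a freshly scheduled pod such as community-operators-mlnrn the first form is expected; the second appears when a sandbox exists but is no longer ready, as during the dnsmasq-dns teardown below. A minimal sketch of that decision follows; the types are hypothetical stand-ins, not the kubelet's real CRI objects.

    package main

    import "fmt"

    // Hypothetical stand-in for a CRI pod sandbox; the real kubelet
    // learns this state from the runtime.
    type sandbox struct {
        id    string
        ready bool
    }

    // needNewSandbox mirrors the decision behind the two log messages.
    func needNewSandbox(sandboxes []sandbox) (bool, string) {
        if len(sandboxes) == 0 {
            return true, "No sandbox for pod can be found. Need to start a new one"
        }
        if !sandboxes[0].ready {
            return true, "No ready sandbox for pod can be found. Need to start a new one"
        }
        return false, ""
    }

    func main() {
        _, msg := needNewSandbox(nil)
        fmt.Println(msg)
    }
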
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.267116 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjz8v\" (UniqueName: \"kubernetes.io/projected/177ab816-ea50-4af9-add2-9e671249b133-kube-api-access-zjz8v\") pod \"177ab816-ea50-4af9-add2-9e671249b133\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.267204 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-openstack-edpm-ipam\") pod \"177ab816-ea50-4af9-add2-9e671249b133\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.267241 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-ovsdbserver-nb\") pod \"177ab816-ea50-4af9-add2-9e671249b133\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.267264 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-ovsdbserver-sb\") pod \"177ab816-ea50-4af9-add2-9e671249b133\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.267398 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-dns-svc\") pod \"177ab816-ea50-4af9-add2-9e671249b133\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.267426 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-dns-swift-storage-0\") pod \"177ab816-ea50-4af9-add2-9e671249b133\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.267531 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-config\") pod \"177ab816-ea50-4af9-add2-9e671249b133\" (UID: \"177ab816-ea50-4af9-add2-9e671249b133\") " Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.273603 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177ab816-ea50-4af9-add2-9e671249b133-kube-api-access-zjz8v" (OuterVolumeSpecName: "kube-api-access-zjz8v") pod "177ab816-ea50-4af9-add2-9e671249b133" (UID: "177ab816-ea50-4af9-add2-9e671249b133"). InnerVolumeSpecName "kube-api-access-zjz8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.302653 4828 generic.go:334] "Generic (PLEG): container finished" podID="4d52aa20-1852-43a9-93da-80a67a764fcd" containerID="855868e000c251f4e04fb153c54a522bf9e50dc772a8c82347abc34b2300d0f7" exitCode=0 Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.302731 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlnrn" event={"ID":"4d52aa20-1852-43a9-93da-80a67a764fcd","Type":"ContainerDied","Data":"855868e000c251f4e04fb153c54a522bf9e50dc772a8c82347abc34b2300d0f7"} Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.307559 4828 generic.go:334] "Generic (PLEG): container finished" podID="177ab816-ea50-4af9-add2-9e671249b133" containerID="1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7" exitCode=0 Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.308137 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.307613 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" event={"ID":"177ab816-ea50-4af9-add2-9e671249b133","Type":"ContainerDied","Data":"1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7"} Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.310013 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-z65kd" event={"ID":"177ab816-ea50-4af9-add2-9e671249b133","Type":"ContainerDied","Data":"617eb0a8b5095e4fd4ac5f564059396d2d000e7e19cfe37eb1c3d5b00e575fd2"} Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.310038 4828 scope.go:117] "RemoveContainer" containerID="1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.327872 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "177ab816-ea50-4af9-add2-9e671249b133" (UID: "177ab816-ea50-4af9-add2-9e671249b133"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.341926 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "177ab816-ea50-4af9-add2-9e671249b133" (UID: "177ab816-ea50-4af9-add2-9e671249b133"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.343664 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "177ab816-ea50-4af9-add2-9e671249b133" (UID: "177ab816-ea50-4af9-add2-9e671249b133"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.353094 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "177ab816-ea50-4af9-add2-9e671249b133" (UID: "177ab816-ea50-4af9-add2-9e671249b133"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.356876 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "177ab816-ea50-4af9-add2-9e671249b133" (UID: "177ab816-ea50-4af9-add2-9e671249b133"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.358944 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-config" (OuterVolumeSpecName: "config") pod "177ab816-ea50-4af9-add2-9e671249b133" (UID: "177ab816-ea50-4af9-add2-9e671249b133"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.369785 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjz8v\" (UniqueName: \"kubernetes.io/projected/177ab816-ea50-4af9-add2-9e671249b133-kube-api-access-zjz8v\") on node \"crc\" DevicePath \"\"" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.369811 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.369846 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.369857 4828 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.369867 4828 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.369876 4828 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.369885 4828 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177ab816-ea50-4af9-add2-9e671249b133-config\") on node \"crc\" DevicePath \"\"" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.397872 4828 scope.go:117] "RemoveContainer" containerID="d1aa029b40479f3cd230a03df83eac74ec3b126845a794080f6d5e10a7c4df8e" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.421048 4828 scope.go:117] "RemoveContainer" 
containerID="1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7" Dec 05 19:29:55 crc kubenswrapper[4828]: E1205 19:29:55.421470 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7\": container with ID starting with 1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7 not found: ID does not exist" containerID="1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.421531 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7"} err="failed to get container status \"1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7\": rpc error: code = NotFound desc = could not find container \"1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7\": container with ID starting with 1cc818f16aa820b07fbb99935fd8b4628e7a697a0085b13631d7e2f90b2997e7 not found: ID does not exist" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.421557 4828 scope.go:117] "RemoveContainer" containerID="d1aa029b40479f3cd230a03df83eac74ec3b126845a794080f6d5e10a7c4df8e" Dec 05 19:29:55 crc kubenswrapper[4828]: E1205 19:29:55.421936 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1aa029b40479f3cd230a03df83eac74ec3b126845a794080f6d5e10a7c4df8e\": container with ID starting with d1aa029b40479f3cd230a03df83eac74ec3b126845a794080f6d5e10a7c4df8e not found: ID does not exist" containerID="d1aa029b40479f3cd230a03df83eac74ec3b126845a794080f6d5e10a7c4df8e" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.421986 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1aa029b40479f3cd230a03df83eac74ec3b126845a794080f6d5e10a7c4df8e"} err="failed to get container status \"d1aa029b40479f3cd230a03df83eac74ec3b126845a794080f6d5e10a7c4df8e\": rpc error: code = NotFound desc = could not find container \"d1aa029b40479f3cd230a03df83eac74ec3b126845a794080f6d5e10a7c4df8e\": container with ID starting with d1aa029b40479f3cd230a03df83eac74ec3b126845a794080f6d5e10a7c4df8e not found: ID does not exist" Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.645624 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-z65kd"] Dec 05 19:29:55 crc kubenswrapper[4828]: I1205 19:29:55.657662 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-z65kd"] Dec 05 19:29:56 crc kubenswrapper[4828]: I1205 19:29:56.317954 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlnrn" event={"ID":"4d52aa20-1852-43a9-93da-80a67a764fcd","Type":"ContainerStarted","Data":"d77ff179adfd0f89dfc99eb9efd374b48906cadacacdef61715dcaf5e5f128b9"} Dec 05 19:29:56 crc kubenswrapper[4828]: I1205 19:29:56.468619 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177ab816-ea50-4af9-add2-9e671249b133" path="/var/lib/kubelet/pods/177ab816-ea50-4af9-add2-9e671249b133/volumes" Dec 05 19:29:57 crc kubenswrapper[4828]: I1205 19:29:57.332089 4828 generic.go:334] "Generic (PLEG): container finished" podID="4d52aa20-1852-43a9-93da-80a67a764fcd" containerID="d77ff179adfd0f89dfc99eb9efd374b48906cadacacdef61715dcaf5e5f128b9" exitCode=0 Dec 05 19:29:57 crc 
kubenswrapper[4828]: I1205 19:29:57.332178 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlnrn" event={"ID":"4d52aa20-1852-43a9-93da-80a67a764fcd","Type":"ContainerDied","Data":"d77ff179adfd0f89dfc99eb9efd374b48906cadacacdef61715dcaf5e5f128b9"} Dec 05 19:29:58 crc kubenswrapper[4828]: I1205 19:29:58.349328 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlnrn" event={"ID":"4d52aa20-1852-43a9-93da-80a67a764fcd","Type":"ContainerStarted","Data":"f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6"} Dec 05 19:29:58 crc kubenswrapper[4828]: I1205 19:29:58.369451 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mlnrn" podStartSLOduration=2.961988789 podStartE2EDuration="5.369434413s" podCreationTimestamp="2025-12-05 19:29:53 +0000 UTC" firstStartedPulling="2025-12-05 19:29:55.305437117 +0000 UTC m=+1573.200659423" lastFinishedPulling="2025-12-05 19:29:57.712882741 +0000 UTC m=+1575.608105047" observedRunningTime="2025-12-05 19:29:58.366506304 +0000 UTC m=+1576.261728620" watchObservedRunningTime="2025-12-05 19:29:58.369434413 +0000 UTC m=+1576.264656719" Dec 05 19:29:58 crc kubenswrapper[4828]: I1205 19:29:58.447485 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:29:58 crc kubenswrapper[4828]: E1205 19:29:58.447878 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.141209 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz"] Dec 05 19:30:00 crc kubenswrapper[4828]: E1205 19:30:00.142051 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177ab816-ea50-4af9-add2-9e671249b133" containerName="init" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.142069 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="177ab816-ea50-4af9-add2-9e671249b133" containerName="init" Dec 05 19:30:00 crc kubenswrapper[4828]: E1205 19:30:00.142082 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177ab816-ea50-4af9-add2-9e671249b133" containerName="dnsmasq-dns" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.142089 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="177ab816-ea50-4af9-add2-9e671249b133" containerName="dnsmasq-dns" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.142323 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="177ab816-ea50-4af9-add2-9e671249b133" containerName="dnsmasq-dns" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.143170 4828 util.go:30] "No sandbox for pod can be found. 
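
The machine-config-daemon errors here and below (again at 19:30:12 and 19:30:27) are the kubelet's crash-loop backoff at work: after each failed restart the delay grows, and "back-off 5m0s" means the delay has reached its ceiling, so the kubelet now retries at most every five minutes until the container stays up long enough for the backoff to reset. A sketch of a capped doubling backoff of this shape, assuming the upstream defaults of a 10s base and a 5m cap (the cap is visible in the message itself); this is illustrative, not kubelet's actual code.

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay returns the wait after `restarts` consecutive
    // failures: the base doubled per failure, capped at max.
    func crashLoopDelay(restarts int, base, max time.Duration) time.Duration {
        d := base
        for i := 0; i < restarts; i++ {
            d *= 2
            if d >= max {
                return max
            }
        }
        return d
    }

    func main() {
        for r := 0; r <= 6; r++ {
            // 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
            fmt.Printf("restart %d -> wait %v\n", r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
        }
    }
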
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.145431 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.152847 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.159057 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz"] Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.281521 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9c7g\" (UniqueName: \"kubernetes.io/projected/37f504a5-cc40-4bc6-9f02-4400004a3dce-kube-api-access-n9c7g\") pod \"collect-profiles-29416050-kp4hz\" (UID: \"37f504a5-cc40-4bc6-9f02-4400004a3dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.281905 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37f504a5-cc40-4bc6-9f02-4400004a3dce-config-volume\") pod \"collect-profiles-29416050-kp4hz\" (UID: \"37f504a5-cc40-4bc6-9f02-4400004a3dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.282014 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37f504a5-cc40-4bc6-9f02-4400004a3dce-secret-volume\") pod \"collect-profiles-29416050-kp4hz\" (UID: \"37f504a5-cc40-4bc6-9f02-4400004a3dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.383607 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9c7g\" (UniqueName: \"kubernetes.io/projected/37f504a5-cc40-4bc6-9f02-4400004a3dce-kube-api-access-n9c7g\") pod \"collect-profiles-29416050-kp4hz\" (UID: \"37f504a5-cc40-4bc6-9f02-4400004a3dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.383692 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37f504a5-cc40-4bc6-9f02-4400004a3dce-config-volume\") pod \"collect-profiles-29416050-kp4hz\" (UID: \"37f504a5-cc40-4bc6-9f02-4400004a3dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.383791 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37f504a5-cc40-4bc6-9f02-4400004a3dce-secret-volume\") pod \"collect-profiles-29416050-kp4hz\" (UID: \"37f504a5-cc40-4bc6-9f02-4400004a3dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.384725 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37f504a5-cc40-4bc6-9f02-4400004a3dce-config-volume\") pod 
\"collect-profiles-29416050-kp4hz\" (UID: \"37f504a5-cc40-4bc6-9f02-4400004a3dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.391541 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37f504a5-cc40-4bc6-9f02-4400004a3dce-secret-volume\") pod \"collect-profiles-29416050-kp4hz\" (UID: \"37f504a5-cc40-4bc6-9f02-4400004a3dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.400097 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9c7g\" (UniqueName: \"kubernetes.io/projected/37f504a5-cc40-4bc6-9f02-4400004a3dce-kube-api-access-n9c7g\") pod \"collect-profiles-29416050-kp4hz\" (UID: \"37f504a5-cc40-4bc6-9f02-4400004a3dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.465644 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:00 crc kubenswrapper[4828]: I1205 19:30:00.916144 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz"] Dec 05 19:30:00 crc kubenswrapper[4828]: W1205 19:30:00.926981 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37f504a5_cc40_4bc6_9f02_4400004a3dce.slice/crio-f49167d4a8919d5a6409022754c364ed929887cbe8decb9671d7cdd2b4f25676 WatchSource:0}: Error finding container f49167d4a8919d5a6409022754c364ed929887cbe8decb9671d7cdd2b4f25676: Status 404 returned error can't find the container with id f49167d4a8919d5a6409022754c364ed929887cbe8decb9671d7cdd2b4f25676 Dec 05 19:30:01 crc kubenswrapper[4828]: I1205 19:30:01.380552 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" event={"ID":"37f504a5-cc40-4bc6-9f02-4400004a3dce","Type":"ContainerStarted","Data":"f49167d4a8919d5a6409022754c364ed929887cbe8decb9671d7cdd2b4f25676"} Dec 05 19:30:02 crc kubenswrapper[4828]: I1205 19:30:02.390788 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" event={"ID":"37f504a5-cc40-4bc6-9f02-4400004a3dce","Type":"ContainerStarted","Data":"7ef5599f084d42eb8f185c7f070e44c8b8cec70e0fba9aaff5d1ed28c881ce7d"} Dec 05 19:30:02 crc kubenswrapper[4828]: I1205 19:30:02.431021 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" podStartSLOduration=2.430997123 podStartE2EDuration="2.430997123s" podCreationTimestamp="2025-12-05 19:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:30:02.410357677 +0000 UTC m=+1580.305579973" watchObservedRunningTime="2025-12-05 19:30:02.430997123 +0000 UTC m=+1580.326219439" Dec 05 19:30:03 crc kubenswrapper[4828]: I1205 19:30:03.405053 4828 generic.go:334] "Generic (PLEG): container finished" podID="37f504a5-cc40-4bc6-9f02-4400004a3dce" containerID="7ef5599f084d42eb8f185c7f070e44c8b8cec70e0fba9aaff5d1ed28c881ce7d" exitCode=0 Dec 05 19:30:03 crc kubenswrapper[4828]: I1205 19:30:03.405164 
4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" event={"ID":"37f504a5-cc40-4bc6-9f02-4400004a3dce","Type":"ContainerDied","Data":"7ef5599f084d42eb8f185c7f070e44c8b8cec70e0fba9aaff5d1ed28c881ce7d"} Dec 05 19:30:03 crc kubenswrapper[4828]: I1205 19:30:03.579136 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:30:03 crc kubenswrapper[4828]: I1205 19:30:03.579187 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:30:03 crc kubenswrapper[4828]: I1205 19:30:03.639862 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:30:04 crc kubenswrapper[4828]: I1205 19:30:04.473260 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:30:04 crc kubenswrapper[4828]: I1205 19:30:04.526077 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mlnrn"] Dec 05 19:30:04 crc kubenswrapper[4828]: I1205 19:30:04.771335 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:04 crc kubenswrapper[4828]: I1205 19:30:04.877472 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37f504a5-cc40-4bc6-9f02-4400004a3dce-secret-volume\") pod \"37f504a5-cc40-4bc6-9f02-4400004a3dce\" (UID: \"37f504a5-cc40-4bc6-9f02-4400004a3dce\") " Dec 05 19:30:04 crc kubenswrapper[4828]: I1205 19:30:04.877607 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9c7g\" (UniqueName: \"kubernetes.io/projected/37f504a5-cc40-4bc6-9f02-4400004a3dce-kube-api-access-n9c7g\") pod \"37f504a5-cc40-4bc6-9f02-4400004a3dce\" (UID: \"37f504a5-cc40-4bc6-9f02-4400004a3dce\") " Dec 05 19:30:04 crc kubenswrapper[4828]: I1205 19:30:04.877818 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37f504a5-cc40-4bc6-9f02-4400004a3dce-config-volume\") pod \"37f504a5-cc40-4bc6-9f02-4400004a3dce\" (UID: \"37f504a5-cc40-4bc6-9f02-4400004a3dce\") " Dec 05 19:30:04 crc kubenswrapper[4828]: I1205 19:30:04.878452 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37f504a5-cc40-4bc6-9f02-4400004a3dce-config-volume" (OuterVolumeSpecName: "config-volume") pod "37f504a5-cc40-4bc6-9f02-4400004a3dce" (UID: "37f504a5-cc40-4bc6-9f02-4400004a3dce"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:30:04 crc kubenswrapper[4828]: I1205 19:30:04.887115 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37f504a5-cc40-4bc6-9f02-4400004a3dce-kube-api-access-n9c7g" (OuterVolumeSpecName: "kube-api-access-n9c7g") pod "37f504a5-cc40-4bc6-9f02-4400004a3dce" (UID: "37f504a5-cc40-4bc6-9f02-4400004a3dce"). InnerVolumeSpecName "kube-api-access-n9c7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:30:04 crc kubenswrapper[4828]: I1205 19:30:04.887997 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37f504a5-cc40-4bc6-9f02-4400004a3dce-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "37f504a5-cc40-4bc6-9f02-4400004a3dce" (UID: "37f504a5-cc40-4bc6-9f02-4400004a3dce"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:30:04 crc kubenswrapper[4828]: I1205 19:30:04.980303 4828 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37f504a5-cc40-4bc6-9f02-4400004a3dce-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:04 crc kubenswrapper[4828]: I1205 19:30:04.980340 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9c7g\" (UniqueName: \"kubernetes.io/projected/37f504a5-cc40-4bc6-9f02-4400004a3dce-kube-api-access-n9c7g\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:04 crc kubenswrapper[4828]: I1205 19:30:04.980354 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37f504a5-cc40-4bc6-9f02-4400004a3dce-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:05 crc kubenswrapper[4828]: I1205 19:30:05.425123 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" event={"ID":"37f504a5-cc40-4bc6-9f02-4400004a3dce","Type":"ContainerDied","Data":"f49167d4a8919d5a6409022754c364ed929887cbe8decb9671d7cdd2b4f25676"} Dec 05 19:30:05 crc kubenswrapper[4828]: I1205 19:30:05.425419 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f49167d4a8919d5a6409022754c364ed929887cbe8decb9671d7cdd2b4f25676" Dec 05 19:30:05 crc kubenswrapper[4828]: I1205 19:30:05.425124 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz" Dec 05 19:30:06 crc kubenswrapper[4828]: I1205 19:30:06.446450 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mlnrn" podUID="4d52aa20-1852-43a9-93da-80a67a764fcd" containerName="registry-server" containerID="cri-o://f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6" gracePeriod=2 Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.080026 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.098176 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvfxw\" (UniqueName: \"kubernetes.io/projected/4d52aa20-1852-43a9-93da-80a67a764fcd-kube-api-access-dvfxw\") pod \"4d52aa20-1852-43a9-93da-80a67a764fcd\" (UID: \"4d52aa20-1852-43a9-93da-80a67a764fcd\") " Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.098270 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52aa20-1852-43a9-93da-80a67a764fcd-catalog-content\") pod \"4d52aa20-1852-43a9-93da-80a67a764fcd\" (UID: \"4d52aa20-1852-43a9-93da-80a67a764fcd\") " Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.098318 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52aa20-1852-43a9-93da-80a67a764fcd-utilities\") pod \"4d52aa20-1852-43a9-93da-80a67a764fcd\" (UID: \"4d52aa20-1852-43a9-93da-80a67a764fcd\") " Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.099734 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d52aa20-1852-43a9-93da-80a67a764fcd-utilities" (OuterVolumeSpecName: "utilities") pod "4d52aa20-1852-43a9-93da-80a67a764fcd" (UID: "4d52aa20-1852-43a9-93da-80a67a764fcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.107762 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d52aa20-1852-43a9-93da-80a67a764fcd-kube-api-access-dvfxw" (OuterVolumeSpecName: "kube-api-access-dvfxw") pod "4d52aa20-1852-43a9-93da-80a67a764fcd" (UID: "4d52aa20-1852-43a9-93da-80a67a764fcd"). InnerVolumeSpecName "kube-api-access-dvfxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.177119 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d52aa20-1852-43a9-93da-80a67a764fcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d52aa20-1852-43a9-93da-80a67a764fcd" (UID: "4d52aa20-1852-43a9-93da-80a67a764fcd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.201106 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvfxw\" (UniqueName: \"kubernetes.io/projected/4d52aa20-1852-43a9-93da-80a67a764fcd-kube-api-access-dvfxw\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.201365 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52aa20-1852-43a9-93da-80a67a764fcd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.201430 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52aa20-1852-43a9-93da-80a67a764fcd-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.455926 4828 generic.go:334] "Generic (PLEG): container finished" podID="4d52aa20-1852-43a9-93da-80a67a764fcd" containerID="f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6" exitCode=0 Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.456069 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlnrn" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.457033 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlnrn" event={"ID":"4d52aa20-1852-43a9-93da-80a67a764fcd","Type":"ContainerDied","Data":"f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6"} Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.457131 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlnrn" event={"ID":"4d52aa20-1852-43a9-93da-80a67a764fcd","Type":"ContainerDied","Data":"b84b8542ff3d7a7cf31489bdd261749ac5de0c13a83fcb058b93627c710d10de"} Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.457208 4828 scope.go:117] "RemoveContainer" containerID="f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.476430 4828 scope.go:117] "RemoveContainer" containerID="d77ff179adfd0f89dfc99eb9efd374b48906cadacacdef61715dcaf5e5f128b9" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.502052 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mlnrn"] Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.512233 4828 scope.go:117] "RemoveContainer" containerID="855868e000c251f4e04fb153c54a522bf9e50dc772a8c82347abc34b2300d0f7" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.513191 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mlnrn"] Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.570457 4828 scope.go:117] "RemoveContainer" containerID="f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6" Dec 05 19:30:07 crc kubenswrapper[4828]: E1205 19:30:07.572104 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6\": container with ID starting with f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6 not found: ID does not exist" containerID="f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.572146 
4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6"} err="failed to get container status \"f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6\": rpc error: code = NotFound desc = could not find container \"f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6\": container with ID starting with f876cc080faf9697cb1be914f48c72a81250a9452472a4ea4d342bad2b7a2fd6 not found: ID does not exist" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.572175 4828 scope.go:117] "RemoveContainer" containerID="d77ff179adfd0f89dfc99eb9efd374b48906cadacacdef61715dcaf5e5f128b9" Dec 05 19:30:07 crc kubenswrapper[4828]: E1205 19:30:07.576092 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d77ff179adfd0f89dfc99eb9efd374b48906cadacacdef61715dcaf5e5f128b9\": container with ID starting with d77ff179adfd0f89dfc99eb9efd374b48906cadacacdef61715dcaf5e5f128b9 not found: ID does not exist" containerID="d77ff179adfd0f89dfc99eb9efd374b48906cadacacdef61715dcaf5e5f128b9" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.576141 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77ff179adfd0f89dfc99eb9efd374b48906cadacacdef61715dcaf5e5f128b9"} err="failed to get container status \"d77ff179adfd0f89dfc99eb9efd374b48906cadacacdef61715dcaf5e5f128b9\": rpc error: code = NotFound desc = could not find container \"d77ff179adfd0f89dfc99eb9efd374b48906cadacacdef61715dcaf5e5f128b9\": container with ID starting with d77ff179adfd0f89dfc99eb9efd374b48906cadacacdef61715dcaf5e5f128b9 not found: ID does not exist" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.576172 4828 scope.go:117] "RemoveContainer" containerID="855868e000c251f4e04fb153c54a522bf9e50dc772a8c82347abc34b2300d0f7" Dec 05 19:30:07 crc kubenswrapper[4828]: E1205 19:30:07.576735 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"855868e000c251f4e04fb153c54a522bf9e50dc772a8c82347abc34b2300d0f7\": container with ID starting with 855868e000c251f4e04fb153c54a522bf9e50dc772a8c82347abc34b2300d0f7 not found: ID does not exist" containerID="855868e000c251f4e04fb153c54a522bf9e50dc772a8c82347abc34b2300d0f7" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.576761 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"855868e000c251f4e04fb153c54a522bf9e50dc772a8c82347abc34b2300d0f7"} err="failed to get container status \"855868e000c251f4e04fb153c54a522bf9e50dc772a8c82347abc34b2300d0f7\": rpc error: code = NotFound desc = could not find container \"855868e000c251f4e04fb153c54a522bf9e50dc772a8c82347abc34b2300d0f7\": container with ID starting with 855868e000c251f4e04fb153c54a522bf9e50dc772a8c82347abc34b2300d0f7 not found: ID does not exist" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.579685 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss"] Dec 05 19:30:07 crc kubenswrapper[4828]: E1205 19:30:07.580063 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f504a5-cc40-4bc6-9f02-4400004a3dce" containerName="collect-profiles" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.580079 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f504a5-cc40-4bc6-9f02-4400004a3dce" 
containerName="collect-profiles" Dec 05 19:30:07 crc kubenswrapper[4828]: E1205 19:30:07.580104 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d52aa20-1852-43a9-93da-80a67a764fcd" containerName="extract-content" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.580111 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d52aa20-1852-43a9-93da-80a67a764fcd" containerName="extract-content" Dec 05 19:30:07 crc kubenswrapper[4828]: E1205 19:30:07.580129 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d52aa20-1852-43a9-93da-80a67a764fcd" containerName="registry-server" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.580136 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d52aa20-1852-43a9-93da-80a67a764fcd" containerName="registry-server" Dec 05 19:30:07 crc kubenswrapper[4828]: E1205 19:30:07.580151 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d52aa20-1852-43a9-93da-80a67a764fcd" containerName="extract-utilities" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.580157 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d52aa20-1852-43a9-93da-80a67a764fcd" containerName="extract-utilities" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.580354 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f504a5-cc40-4bc6-9f02-4400004a3dce" containerName="collect-profiles" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.580376 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d52aa20-1852-43a9-93da-80a67a764fcd" containerName="registry-server" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.581217 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.583542 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.583774 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.583977 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.584125 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.591310 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss"] Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.608669 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.608728 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss\" 
(UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.608793 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.608846 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cckth\" (UniqueName: \"kubernetes.io/projected/a2df868b-dc23-4623-9203-42c91c9ff35b-kube-api-access-cckth\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.710769 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.710903 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.710970 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cckth\" (UniqueName: \"kubernetes.io/projected/a2df868b-dc23-4623-9203-42c91c9ff35b-kube-api-access-cckth\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.711090 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.714877 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.714987 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss\" (UID: 
\"a2df868b-dc23-4623-9203-42c91c9ff35b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.716663 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.729883 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cckth\" (UniqueName: \"kubernetes.io/projected/a2df868b-dc23-4623-9203-42c91c9ff35b-kube-api-access-cckth\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:07 crc kubenswrapper[4828]: I1205 19:30:07.921132 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:08 crc kubenswrapper[4828]: W1205 19:30:08.456743 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2df868b_dc23_4623_9203_42c91c9ff35b.slice/crio-614b4eb7cba3fb3e0616aeec71d9345f76048bca05578bfe88a84970176ec8cf WatchSource:0}: Error finding container 614b4eb7cba3fb3e0616aeec71d9345f76048bca05578bfe88a84970176ec8cf: Status 404 returned error can't find the container with id 614b4eb7cba3fb3e0616aeec71d9345f76048bca05578bfe88a84970176ec8cf Dec 05 19:30:08 crc kubenswrapper[4828]: I1205 19:30:08.457966 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d52aa20-1852-43a9-93da-80a67a764fcd" path="/var/lib/kubelet/pods/4d52aa20-1852-43a9-93da-80a67a764fcd/volumes" Dec 05 19:30:08 crc kubenswrapper[4828]: I1205 19:30:08.459124 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss"] Dec 05 19:30:09 crc kubenswrapper[4828]: I1205 19:30:09.484083 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" event={"ID":"a2df868b-dc23-4623-9203-42c91c9ff35b","Type":"ContainerStarted","Data":"614b4eb7cba3fb3e0616aeec71d9345f76048bca05578bfe88a84970176ec8cf"} Dec 05 19:30:12 crc kubenswrapper[4828]: I1205 19:30:12.454740 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:30:12 crc kubenswrapper[4828]: E1205 19:30:12.455323 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:30:20 crc kubenswrapper[4828]: I1205 19:30:20.600725 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" event={"ID":"a2df868b-dc23-4623-9203-42c91c9ff35b","Type":"ContainerStarted","Data":"d3459d40f40e442a5da0ce74ff44655359405f117962255b5cda0c6f436a5f16"} Dec 05 19:30:20 crc kubenswrapper[4828]: 
I1205 19:30:20.635127 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" podStartSLOduration=2.183392851 podStartE2EDuration="13.635107382s" podCreationTimestamp="2025-12-05 19:30:07 +0000 UTC" firstStartedPulling="2025-12-05 19:30:08.461537894 +0000 UTC m=+1586.356760200" lastFinishedPulling="2025-12-05 19:30:19.913252425 +0000 UTC m=+1597.808474731" observedRunningTime="2025-12-05 19:30:20.620342783 +0000 UTC m=+1598.515565129" watchObservedRunningTime="2025-12-05 19:30:20.635107382 +0000 UTC m=+1598.530329698" Dec 05 19:30:27 crc kubenswrapper[4828]: I1205 19:30:27.446888 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:30:27 crc kubenswrapper[4828]: E1205 19:30:27.447935 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:30:31 crc kubenswrapper[4828]: I1205 19:30:31.394640 4828 scope.go:117] "RemoveContainer" containerID="948bf1168f1fe5e8ef0204bb17b2697b5a80db6cb78befbc95abf38a7ba890c7" Dec 05 19:30:31 crc kubenswrapper[4828]: I1205 19:30:31.420874 4828 scope.go:117] "RemoveContainer" containerID="d33743109ee3f46be42225d0b4190a263fee6dc17dc4aa055bc08ba905559bbb" Dec 05 19:30:31 crc kubenswrapper[4828]: I1205 19:30:31.444544 4828 scope.go:117] "RemoveContainer" containerID="0262f380f91c94c1037da4a9a9d49de5b716f87161a7cf28c03238b52e61335a" Dec 05 19:30:32 crc kubenswrapper[4828]: I1205 19:30:32.716382 4828 generic.go:334] "Generic (PLEG): container finished" podID="a2df868b-dc23-4623-9203-42c91c9ff35b" containerID="d3459d40f40e442a5da0ce74ff44655359405f117962255b5cda0c6f436a5f16" exitCode=0 Dec 05 19:30:32 crc kubenswrapper[4828]: I1205 19:30:32.716460 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" event={"ID":"a2df868b-dc23-4623-9203-42c91c9ff35b","Type":"ContainerDied","Data":"d3459d40f40e442a5da0ce74ff44655359405f117962255b5cda0c6f436a5f16"} Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.177085 4828 util.go:48] "No ready sandbox for pod can be found. 
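
The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is the observed-running time minus podCreationTimestamp (19:30:20.635107382 − 19:30:07 = 13.635107382s), and podStartSLOduration subtracts the image-pull window, lastFinishedPulling − firstStartedPulling (19:30:19.913252425 − 19:30:08.461537894 = 11.451714531s), giving 2.183392851s; as the numbers show, the SLO figure excludes pull time. A quick check of that arithmetic, with the timestamps copied from the entry (the layout string is Go's default time formatting, which these fields appear to use):

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-12-05 19:30:07 +0000 UTC")
        firstPull := mustParse("2025-12-05 19:30:08.461537894 +0000 UTC")
        lastPull := mustParse("2025-12-05 19:30:19.913252425 +0000 UTC")
        observed := mustParse("2025-12-05 19:30:20.635107382 +0000 UTC")

        e2e := observed.Sub(created)     // podStartE2EDuration
        pull := lastPull.Sub(firstPull)  // image-pull window
        fmt.Println(e2e, pull, e2e-pull) // 13.635107382s 11.451714531s 2.183392851s
    }
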
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.334893 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-repo-setup-combined-ca-bundle\") pod \"a2df868b-dc23-4623-9203-42c91c9ff35b\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.335143 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-inventory\") pod \"a2df868b-dc23-4623-9203-42c91c9ff35b\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.335168 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-ssh-key\") pod \"a2df868b-dc23-4623-9203-42c91c9ff35b\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.335225 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cckth\" (UniqueName: \"kubernetes.io/projected/a2df868b-dc23-4623-9203-42c91c9ff35b-kube-api-access-cckth\") pod \"a2df868b-dc23-4623-9203-42c91c9ff35b\" (UID: \"a2df868b-dc23-4623-9203-42c91c9ff35b\") " Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.341556 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2df868b-dc23-4623-9203-42c91c9ff35b-kube-api-access-cckth" (OuterVolumeSpecName: "kube-api-access-cckth") pod "a2df868b-dc23-4623-9203-42c91c9ff35b" (UID: "a2df868b-dc23-4623-9203-42c91c9ff35b"). InnerVolumeSpecName "kube-api-access-cckth". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.349637 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a2df868b-dc23-4623-9203-42c91c9ff35b" (UID: "a2df868b-dc23-4623-9203-42c91c9ff35b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.364219 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-inventory" (OuterVolumeSpecName: "inventory") pod "a2df868b-dc23-4623-9203-42c91c9ff35b" (UID: "a2df868b-dc23-4623-9203-42c91c9ff35b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.368866 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a2df868b-dc23-4623-9203-42c91c9ff35b" (UID: "a2df868b-dc23-4623-9203-42c91c9ff35b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.437171 4828 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.437455 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.437465 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2df868b-dc23-4623-9203-42c91c9ff35b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.437474 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cckth\" (UniqueName: \"kubernetes.io/projected/a2df868b-dc23-4623-9203-42c91c9ff35b-kube-api-access-cckth\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.739552 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" event={"ID":"a2df868b-dc23-4623-9203-42c91c9ff35b","Type":"ContainerDied","Data":"614b4eb7cba3fb3e0616aeec71d9345f76048bca05578bfe88a84970176ec8cf"} Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.739614 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="614b4eb7cba3fb3e0616aeec71d9345f76048bca05578bfe88a84970176ec8cf" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.739630 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.818845 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz"] Dec 05 19:30:34 crc kubenswrapper[4828]: E1205 19:30:34.819248 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2df868b-dc23-4623-9203-42c91c9ff35b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.819266 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2df868b-dc23-4623-9203-42c91c9ff35b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.819474 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2df868b-dc23-4623-9203-42c91c9ff35b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.820319 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.824437 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.824670 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.825652 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.825845 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.828748 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz"] Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.946665 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/096e625a-8244-411f-aaad-9746cf1e1878-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5fhnz\" (UID: \"096e625a-8244-411f-aaad-9746cf1e1878\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.946979 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pndjg\" (UniqueName: \"kubernetes.io/projected/096e625a-8244-411f-aaad-9746cf1e1878-kube-api-access-pndjg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5fhnz\" (UID: \"096e625a-8244-411f-aaad-9746cf1e1878\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:34 crc kubenswrapper[4828]: I1205 19:30:34.947687 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/096e625a-8244-411f-aaad-9746cf1e1878-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5fhnz\" (UID: \"096e625a-8244-411f-aaad-9746cf1e1878\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:35 crc kubenswrapper[4828]: I1205 19:30:35.049276 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/096e625a-8244-411f-aaad-9746cf1e1878-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5fhnz\" (UID: \"096e625a-8244-411f-aaad-9746cf1e1878\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:35 crc kubenswrapper[4828]: I1205 19:30:35.049403 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pndjg\" (UniqueName: \"kubernetes.io/projected/096e625a-8244-411f-aaad-9746cf1e1878-kube-api-access-pndjg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5fhnz\" (UID: \"096e625a-8244-411f-aaad-9746cf1e1878\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:35 crc kubenswrapper[4828]: I1205 19:30:35.049468 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/096e625a-8244-411f-aaad-9746cf1e1878-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5fhnz\" (UID: \"096e625a-8244-411f-aaad-9746cf1e1878\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:35 crc kubenswrapper[4828]: I1205 19:30:35.054552 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/096e625a-8244-411f-aaad-9746cf1e1878-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5fhnz\" (UID: \"096e625a-8244-411f-aaad-9746cf1e1878\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:35 crc kubenswrapper[4828]: I1205 19:30:35.057317 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/096e625a-8244-411f-aaad-9746cf1e1878-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5fhnz\" (UID: \"096e625a-8244-411f-aaad-9746cf1e1878\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:35 crc kubenswrapper[4828]: I1205 19:30:35.078486 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pndjg\" (UniqueName: \"kubernetes.io/projected/096e625a-8244-411f-aaad-9746cf1e1878-kube-api-access-pndjg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5fhnz\" (UID: \"096e625a-8244-411f-aaad-9746cf1e1878\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:35 crc kubenswrapper[4828]: I1205 19:30:35.138734 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:35 crc kubenswrapper[4828]: W1205 19:30:35.618994 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod096e625a_8244_411f_aaad_9746cf1e1878.slice/crio-abb8ed07fbccf2206248d5effd5df06975af0e3dfb30cecc75a8b0537d05d686 WatchSource:0}: Error finding container abb8ed07fbccf2206248d5effd5df06975af0e3dfb30cecc75a8b0537d05d686: Status 404 returned error can't find the container with id abb8ed07fbccf2206248d5effd5df06975af0e3dfb30cecc75a8b0537d05d686 Dec 05 19:30:35 crc kubenswrapper[4828]: I1205 19:30:35.619361 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz"] Dec 05 19:30:35 crc kubenswrapper[4828]: I1205 19:30:35.749739 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" event={"ID":"096e625a-8244-411f-aaad-9746cf1e1878","Type":"ContainerStarted","Data":"abb8ed07fbccf2206248d5effd5df06975af0e3dfb30cecc75a8b0537d05d686"} Dec 05 19:30:36 crc kubenswrapper[4828]: I1205 19:30:36.760917 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" event={"ID":"096e625a-8244-411f-aaad-9746cf1e1878","Type":"ContainerStarted","Data":"4cf6f445ac6932a401be722db6d39afcc19874bd91a13bb0c68d8ff04c995fad"} Dec 05 19:30:39 crc kubenswrapper[4828]: I1205 19:30:39.786799 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" event={"ID":"096e625a-8244-411f-aaad-9746cf1e1878","Type":"ContainerDied","Data":"4cf6f445ac6932a401be722db6d39afcc19874bd91a13bb0c68d8ff04c995fad"} Dec 05 19:30:39 crc kubenswrapper[4828]: I1205 19:30:39.786698 4828 generic.go:334] "Generic (PLEG): container finished" podID="096e625a-8244-411f-aaad-9746cf1e1878" containerID="4cf6f445ac6932a401be722db6d39afcc19874bd91a13bb0c68d8ff04c995fad" exitCode=0 Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 
19:30:41.216440 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.274480 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/096e625a-8244-411f-aaad-9746cf1e1878-ssh-key\") pod \"096e625a-8244-411f-aaad-9746cf1e1878\" (UID: \"096e625a-8244-411f-aaad-9746cf1e1878\") " Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.274780 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pndjg\" (UniqueName: \"kubernetes.io/projected/096e625a-8244-411f-aaad-9746cf1e1878-kube-api-access-pndjg\") pod \"096e625a-8244-411f-aaad-9746cf1e1878\" (UID: \"096e625a-8244-411f-aaad-9746cf1e1878\") " Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.274817 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/096e625a-8244-411f-aaad-9746cf1e1878-inventory\") pod \"096e625a-8244-411f-aaad-9746cf1e1878\" (UID: \"096e625a-8244-411f-aaad-9746cf1e1878\") " Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.282043 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096e625a-8244-411f-aaad-9746cf1e1878-kube-api-access-pndjg" (OuterVolumeSpecName: "kube-api-access-pndjg") pod "096e625a-8244-411f-aaad-9746cf1e1878" (UID: "096e625a-8244-411f-aaad-9746cf1e1878"). InnerVolumeSpecName "kube-api-access-pndjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.303518 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096e625a-8244-411f-aaad-9746cf1e1878-inventory" (OuterVolumeSpecName: "inventory") pod "096e625a-8244-411f-aaad-9746cf1e1878" (UID: "096e625a-8244-411f-aaad-9746cf1e1878"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.306731 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096e625a-8244-411f-aaad-9746cf1e1878-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "096e625a-8244-411f-aaad-9746cf1e1878" (UID: "096e625a-8244-411f-aaad-9746cf1e1878"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.377156 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/096e625a-8244-411f-aaad-9746cf1e1878-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.377188 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pndjg\" (UniqueName: \"kubernetes.io/projected/096e625a-8244-411f-aaad-9746cf1e1878-kube-api-access-pndjg\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.377226 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/096e625a-8244-411f-aaad-9746cf1e1878-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.446874 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:30:41 crc kubenswrapper[4828]: E1205 19:30:41.447164 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.813006 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" event={"ID":"096e625a-8244-411f-aaad-9746cf1e1878","Type":"ContainerDied","Data":"abb8ed07fbccf2206248d5effd5df06975af0e3dfb30cecc75a8b0537d05d686"} Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.813058 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abb8ed07fbccf2206248d5effd5df06975af0e3dfb30cecc75a8b0537d05d686" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.813112 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5fhnz" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.878694 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr"] Dec 05 19:30:41 crc kubenswrapper[4828]: E1205 19:30:41.879286 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096e625a-8244-411f-aaad-9746cf1e1878" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.879312 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="096e625a-8244-411f-aaad-9746cf1e1878" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.879551 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="096e625a-8244-411f-aaad-9746cf1e1878" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.880429 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.883966 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.884613 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.884724 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.885123 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.894899 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr"] Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.985265 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.985328 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.985365 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:41 crc kubenswrapper[4828]: I1205 19:30:41.985520 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4d9c\" (UniqueName: \"kubernetes.io/projected/f959e321-6568-4dd3-8c87-0ebb49d9c517-kube-api-access-j4d9c\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:42 crc kubenswrapper[4828]: I1205 19:30:42.087235 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:42 crc kubenswrapper[4828]: I1205 19:30:42.087329 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:42 crc kubenswrapper[4828]: I1205 19:30:42.087374 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:42 crc kubenswrapper[4828]: I1205 19:30:42.087429 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4d9c\" (UniqueName: \"kubernetes.io/projected/f959e321-6568-4dd3-8c87-0ebb49d9c517-kube-api-access-j4d9c\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:42 crc kubenswrapper[4828]: I1205 19:30:42.092952 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:42 crc kubenswrapper[4828]: I1205 19:30:42.093466 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:42 crc kubenswrapper[4828]: I1205 19:30:42.098454 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:42 crc kubenswrapper[4828]: I1205 19:30:42.114288 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4d9c\" (UniqueName: \"kubernetes.io/projected/f959e321-6568-4dd3-8c87-0ebb49d9c517-kube-api-access-j4d9c\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:42 crc kubenswrapper[4828]: I1205 19:30:42.207655 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" Dec 05 19:30:43 crc kubenswrapper[4828]: I1205 19:30:43.794479 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr"] Dec 05 19:30:43 crc kubenswrapper[4828]: W1205 19:30:43.802567 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf959e321_6568_4dd3_8c87_0ebb49d9c517.slice/crio-59e68b36a2880bbe55dbb76a8e93e213151869f225bd4a5044f13668bc27c473 WatchSource:0}: Error finding container 59e68b36a2880bbe55dbb76a8e93e213151869f225bd4a5044f13668bc27c473: Status 404 returned error can't find the container with id 59e68b36a2880bbe55dbb76a8e93e213151869f225bd4a5044f13668bc27c473 Dec 05 19:30:43 crc kubenswrapper[4828]: I1205 19:30:43.805684 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 19:30:43 crc kubenswrapper[4828]: I1205 19:30:43.832342 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" event={"ID":"f959e321-6568-4dd3-8c87-0ebb49d9c517","Type":"ContainerStarted","Data":"59e68b36a2880bbe55dbb76a8e93e213151869f225bd4a5044f13668bc27c473"} Dec 05 19:30:44 crc kubenswrapper[4828]: I1205 19:30:44.283125 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:30:44 crc kubenswrapper[4828]: I1205 19:30:44.843588 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" event={"ID":"f959e321-6568-4dd3-8c87-0ebb49d9c517","Type":"ContainerStarted","Data":"87bb6c27da1e3373eb6d1654b4e8507a02d5e8bb5d33b28d07ace3b1d669184c"} Dec 05 19:30:44 crc kubenswrapper[4828]: I1205 19:30:44.872483 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" podStartSLOduration=3.398163345 podStartE2EDuration="3.872464586s" podCreationTimestamp="2025-12-05 19:30:41 +0000 UTC" firstStartedPulling="2025-12-05 19:30:43.805470507 +0000 UTC m=+1621.700692813" lastFinishedPulling="2025-12-05 19:30:44.279771738 +0000 UTC m=+1622.174994054" observedRunningTime="2025-12-05 19:30:44.866158365 +0000 UTC m=+1622.761380741" watchObservedRunningTime="2025-12-05 19:30:44.872464586 +0000 UTC m=+1622.767686892" Dec 05 19:30:52 crc kubenswrapper[4828]: I1205 19:30:52.455972 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:30:52 crc kubenswrapper[4828]: E1205 19:30:52.456842 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:31:07 crc kubenswrapper[4828]: I1205 19:31:07.446989 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:31:07 crc kubenswrapper[4828]: E1205 19:31:07.447743 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:31:21 crc kubenswrapper[4828]: I1205 19:31:21.447311 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:31:21 crc kubenswrapper[4828]: E1205 19:31:21.448196 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:31:28 crc kubenswrapper[4828]: I1205 19:31:28.426305 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9mbxg"] Dec 05 19:31:28 crc kubenswrapper[4828]: I1205 19:31:28.429382 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:28 crc kubenswrapper[4828]: I1205 19:31:28.439206 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mbxg"] Dec 05 19:31:28 crc kubenswrapper[4828]: I1205 19:31:28.564957 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7717893-ea89-463b-a863-42c3cd0c57cb-catalog-content\") pod \"redhat-marketplace-9mbxg\" (UID: \"a7717893-ea89-463b-a863-42c3cd0c57cb\") " pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:28 crc kubenswrapper[4828]: I1205 19:31:28.565049 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7717893-ea89-463b-a863-42c3cd0c57cb-utilities\") pod \"redhat-marketplace-9mbxg\" (UID: \"a7717893-ea89-463b-a863-42c3cd0c57cb\") " pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:28 crc kubenswrapper[4828]: I1205 19:31:28.565079 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bst4r\" (UniqueName: \"kubernetes.io/projected/a7717893-ea89-463b-a863-42c3cd0c57cb-kube-api-access-bst4r\") pod \"redhat-marketplace-9mbxg\" (UID: \"a7717893-ea89-463b-a863-42c3cd0c57cb\") " pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:28 crc kubenswrapper[4828]: I1205 19:31:28.667093 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7717893-ea89-463b-a863-42c3cd0c57cb-utilities\") pod \"redhat-marketplace-9mbxg\" (UID: \"a7717893-ea89-463b-a863-42c3cd0c57cb\") " pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:28 crc kubenswrapper[4828]: I1205 19:31:28.667195 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bst4r\" (UniqueName: \"kubernetes.io/projected/a7717893-ea89-463b-a863-42c3cd0c57cb-kube-api-access-bst4r\") pod \"redhat-marketplace-9mbxg\" (UID: \"a7717893-ea89-463b-a863-42c3cd0c57cb\") " pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:28 crc kubenswrapper[4828]: 
I1205 19:31:28.667527 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7717893-ea89-463b-a863-42c3cd0c57cb-catalog-content\") pod \"redhat-marketplace-9mbxg\" (UID: \"a7717893-ea89-463b-a863-42c3cd0c57cb\") " pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:28 crc kubenswrapper[4828]: I1205 19:31:28.667712 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7717893-ea89-463b-a863-42c3cd0c57cb-utilities\") pod \"redhat-marketplace-9mbxg\" (UID: \"a7717893-ea89-463b-a863-42c3cd0c57cb\") " pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:28 crc kubenswrapper[4828]: I1205 19:31:28.667919 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7717893-ea89-463b-a863-42c3cd0c57cb-catalog-content\") pod \"redhat-marketplace-9mbxg\" (UID: \"a7717893-ea89-463b-a863-42c3cd0c57cb\") " pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:28 crc kubenswrapper[4828]: I1205 19:31:28.691569 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bst4r\" (UniqueName: \"kubernetes.io/projected/a7717893-ea89-463b-a863-42c3cd0c57cb-kube-api-access-bst4r\") pod \"redhat-marketplace-9mbxg\" (UID: \"a7717893-ea89-463b-a863-42c3cd0c57cb\") " pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:28 crc kubenswrapper[4828]: I1205 19:31:28.754844 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:29 crc kubenswrapper[4828]: I1205 19:31:29.188458 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mbxg"] Dec 05 19:31:29 crc kubenswrapper[4828]: W1205 19:31:29.200468 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7717893_ea89_463b_a863_42c3cd0c57cb.slice/crio-16390fe8389451ed7c8eee2bd8401f8ab4e74e172d29dbda4e585204ab12daac WatchSource:0}: Error finding container 16390fe8389451ed7c8eee2bd8401f8ab4e74e172d29dbda4e585204ab12daac: Status 404 returned error can't find the container with id 16390fe8389451ed7c8eee2bd8401f8ab4e74e172d29dbda4e585204ab12daac Dec 05 19:31:29 crc kubenswrapper[4828]: I1205 19:31:29.719145 4828 generic.go:334] "Generic (PLEG): container finished" podID="a7717893-ea89-463b-a863-42c3cd0c57cb" containerID="5124ac388f31fcffc2d4c73f10383108092e0357f9fb0494463af9f06d995a3d" exitCode=0 Dec 05 19:31:29 crc kubenswrapper[4828]: I1205 19:31:29.719187 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbxg" event={"ID":"a7717893-ea89-463b-a863-42c3cd0c57cb","Type":"ContainerDied","Data":"5124ac388f31fcffc2d4c73f10383108092e0357f9fb0494463af9f06d995a3d"} Dec 05 19:31:29 crc kubenswrapper[4828]: I1205 19:31:29.719213 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbxg" event={"ID":"a7717893-ea89-463b-a863-42c3cd0c57cb","Type":"ContainerStarted","Data":"16390fe8389451ed7c8eee2bd8401f8ab4e74e172d29dbda4e585204ab12daac"} Dec 05 19:31:30 crc kubenswrapper[4828]: I1205 19:31:30.733324 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbxg" 
event={"ID":"a7717893-ea89-463b-a863-42c3cd0c57cb","Type":"ContainerStarted","Data":"966ce851352aecd652476ae0626aef705ff5b33866f6be119633e82014f475de"} Dec 05 19:31:31 crc kubenswrapper[4828]: I1205 19:31:31.744396 4828 generic.go:334] "Generic (PLEG): container finished" podID="a7717893-ea89-463b-a863-42c3cd0c57cb" containerID="966ce851352aecd652476ae0626aef705ff5b33866f6be119633e82014f475de" exitCode=0 Dec 05 19:31:31 crc kubenswrapper[4828]: I1205 19:31:31.744453 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbxg" event={"ID":"a7717893-ea89-463b-a863-42c3cd0c57cb","Type":"ContainerDied","Data":"966ce851352aecd652476ae0626aef705ff5b33866f6be119633e82014f475de"} Dec 05 19:31:31 crc kubenswrapper[4828]: I1205 19:31:31.744496 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbxg" event={"ID":"a7717893-ea89-463b-a863-42c3cd0c57cb","Type":"ContainerStarted","Data":"f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317"} Dec 05 19:31:31 crc kubenswrapper[4828]: I1205 19:31:31.769199 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9mbxg" podStartSLOduration=2.374528036 podStartE2EDuration="3.769182136s" podCreationTimestamp="2025-12-05 19:31:28 +0000 UTC" firstStartedPulling="2025-12-05 19:31:29.721381753 +0000 UTC m=+1667.616604059" lastFinishedPulling="2025-12-05 19:31:31.116035853 +0000 UTC m=+1669.011258159" observedRunningTime="2025-12-05 19:31:31.765988819 +0000 UTC m=+1669.661211135" watchObservedRunningTime="2025-12-05 19:31:31.769182136 +0000 UTC m=+1669.664404442" Dec 05 19:31:32 crc kubenswrapper[4828]: I1205 19:31:32.763344 4828 generic.go:334] "Generic (PLEG): container finished" podID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" containerID="7d8435f242c38118d7f0cc40add4f792e71bcab239d7df82aa8fd6e2f7e074fd" exitCode=1 Dec 05 19:31:32 crc kubenswrapper[4828]: I1205 19:31:32.763556 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerDied","Data":"7d8435f242c38118d7f0cc40add4f792e71bcab239d7df82aa8fd6e2f7e074fd"} Dec 05 19:31:32 crc kubenswrapper[4828]: I1205 19:31:32.763691 4828 scope.go:117] "RemoveContainer" containerID="d000229fa1db508cef366e145d044d5816652c2a9c5bba1cd918b2052aa0438a" Dec 05 19:31:32 crc kubenswrapper[4828]: I1205 19:31:32.764868 4828 scope.go:117] "RemoveContainer" containerID="7d8435f242c38118d7f0cc40add4f792e71bcab239d7df82aa8fd6e2f7e074fd" Dec 05 19:31:32 crc kubenswrapper[4828]: E1205 19:31:32.765482 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:31:33 crc kubenswrapper[4828]: I1205 19:31:33.446891 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:31:33 crc kubenswrapper[4828]: E1205 19:31:33.447530 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:31:35 crc kubenswrapper[4828]: I1205 19:31:35.118170 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:31:35 crc kubenswrapper[4828]: I1205 19:31:35.118816 4828 scope.go:117] "RemoveContainer" containerID="7d8435f242c38118d7f0cc40add4f792e71bcab239d7df82aa8fd6e2f7e074fd" Dec 05 19:31:35 crc kubenswrapper[4828]: E1205 19:31:35.119156 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:31:38 crc kubenswrapper[4828]: I1205 19:31:38.755851 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:38 crc kubenswrapper[4828]: I1205 19:31:38.756234 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:38 crc kubenswrapper[4828]: I1205 19:31:38.811716 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:38 crc kubenswrapper[4828]: I1205 19:31:38.886270 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:39 crc kubenswrapper[4828]: I1205 19:31:39.048732 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mbxg"] Dec 05 19:31:40 crc kubenswrapper[4828]: I1205 19:31:40.857135 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9mbxg" podUID="a7717893-ea89-463b-a863-42c3cd0c57cb" containerName="registry-server" containerID="cri-o://f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317" gracePeriod=2 Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.323981 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.415680 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bst4r\" (UniqueName: \"kubernetes.io/projected/a7717893-ea89-463b-a863-42c3cd0c57cb-kube-api-access-bst4r\") pod \"a7717893-ea89-463b-a863-42c3cd0c57cb\" (UID: \"a7717893-ea89-463b-a863-42c3cd0c57cb\") " Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.416152 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7717893-ea89-463b-a863-42c3cd0c57cb-utilities\") pod \"a7717893-ea89-463b-a863-42c3cd0c57cb\" (UID: \"a7717893-ea89-463b-a863-42c3cd0c57cb\") " Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.416376 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7717893-ea89-463b-a863-42c3cd0c57cb-catalog-content\") pod \"a7717893-ea89-463b-a863-42c3cd0c57cb\" (UID: \"a7717893-ea89-463b-a863-42c3cd0c57cb\") " Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.417668 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7717893-ea89-463b-a863-42c3cd0c57cb-utilities" (OuterVolumeSpecName: "utilities") pod "a7717893-ea89-463b-a863-42c3cd0c57cb" (UID: "a7717893-ea89-463b-a863-42c3cd0c57cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.421756 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7717893-ea89-463b-a863-42c3cd0c57cb-kube-api-access-bst4r" (OuterVolumeSpecName: "kube-api-access-bst4r") pod "a7717893-ea89-463b-a863-42c3cd0c57cb" (UID: "a7717893-ea89-463b-a863-42c3cd0c57cb"). InnerVolumeSpecName "kube-api-access-bst4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.439741 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7717893-ea89-463b-a863-42c3cd0c57cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7717893-ea89-463b-a863-42c3cd0c57cb" (UID: "a7717893-ea89-463b-a863-42c3cd0c57cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.518908 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bst4r\" (UniqueName: \"kubernetes.io/projected/a7717893-ea89-463b-a863-42c3cd0c57cb-kube-api-access-bst4r\") on node \"crc\" DevicePath \"\"" Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.518939 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7717893-ea89-463b-a863-42c3cd0c57cb-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.518949 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7717893-ea89-463b-a863-42c3cd0c57cb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.872518 4828 generic.go:334] "Generic (PLEG): container finished" podID="a7717893-ea89-463b-a863-42c3cd0c57cb" containerID="f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317" exitCode=0 Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.872589 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbxg" event={"ID":"a7717893-ea89-463b-a863-42c3cd0c57cb","Type":"ContainerDied","Data":"f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317"} Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.872625 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9mbxg" Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.872651 4828 scope.go:117] "RemoveContainer" containerID="f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317" Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.872634 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mbxg" event={"ID":"a7717893-ea89-463b-a863-42c3cd0c57cb","Type":"ContainerDied","Data":"16390fe8389451ed7c8eee2bd8401f8ab4e74e172d29dbda4e585204ab12daac"} Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.902291 4828 scope.go:117] "RemoveContainer" containerID="966ce851352aecd652476ae0626aef705ff5b33866f6be119633e82014f475de" Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.923253 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mbxg"] Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.936658 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mbxg"] Dec 05 19:31:41 crc kubenswrapper[4828]: I1205 19:31:41.948747 4828 scope.go:117] "RemoveContainer" containerID="5124ac388f31fcffc2d4c73f10383108092e0357f9fb0494463af9f06d995a3d" Dec 05 19:31:42 crc kubenswrapper[4828]: I1205 19:31:42.014123 4828 scope.go:117] "RemoveContainer" containerID="f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317" Dec 05 19:31:42 crc kubenswrapper[4828]: E1205 19:31:42.014689 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317\": container with ID starting with f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317 not found: ID does not exist" containerID="f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317" Dec 05 19:31:42 crc kubenswrapper[4828]: I1205 19:31:42.014725 4828 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317"} err="failed to get container status \"f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317\": rpc error: code = NotFound desc = could not find container \"f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317\": container with ID starting with f9559bdb71ac599170c3871ed30143ae94aad84e3f0562d25cc3127992a2b317 not found: ID does not exist" Dec 05 19:31:42 crc kubenswrapper[4828]: I1205 19:31:42.014745 4828 scope.go:117] "RemoveContainer" containerID="966ce851352aecd652476ae0626aef705ff5b33866f6be119633e82014f475de" Dec 05 19:31:42 crc kubenswrapper[4828]: E1205 19:31:42.016293 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"966ce851352aecd652476ae0626aef705ff5b33866f6be119633e82014f475de\": container with ID starting with 966ce851352aecd652476ae0626aef705ff5b33866f6be119633e82014f475de not found: ID does not exist" containerID="966ce851352aecd652476ae0626aef705ff5b33866f6be119633e82014f475de" Dec 05 19:31:42 crc kubenswrapper[4828]: I1205 19:31:42.016347 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"966ce851352aecd652476ae0626aef705ff5b33866f6be119633e82014f475de"} err="failed to get container status \"966ce851352aecd652476ae0626aef705ff5b33866f6be119633e82014f475de\": rpc error: code = NotFound desc = could not find container \"966ce851352aecd652476ae0626aef705ff5b33866f6be119633e82014f475de\": container with ID starting with 966ce851352aecd652476ae0626aef705ff5b33866f6be119633e82014f475de not found: ID does not exist" Dec 05 19:31:42 crc kubenswrapper[4828]: I1205 19:31:42.016436 4828 scope.go:117] "RemoveContainer" containerID="5124ac388f31fcffc2d4c73f10383108092e0357f9fb0494463af9f06d995a3d" Dec 05 19:31:42 crc kubenswrapper[4828]: E1205 19:31:42.016873 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5124ac388f31fcffc2d4c73f10383108092e0357f9fb0494463af9f06d995a3d\": container with ID starting with 5124ac388f31fcffc2d4c73f10383108092e0357f9fb0494463af9f06d995a3d not found: ID does not exist" containerID="5124ac388f31fcffc2d4c73f10383108092e0357f9fb0494463af9f06d995a3d" Dec 05 19:31:42 crc kubenswrapper[4828]: I1205 19:31:42.016911 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5124ac388f31fcffc2d4c73f10383108092e0357f9fb0494463af9f06d995a3d"} err="failed to get container status \"5124ac388f31fcffc2d4c73f10383108092e0357f9fb0494463af9f06d995a3d\": rpc error: code = NotFound desc = could not find container \"5124ac388f31fcffc2d4c73f10383108092e0357f9fb0494463af9f06d995a3d\": container with ID starting with 5124ac388f31fcffc2d4c73f10383108092e0357f9fb0494463af9f06d995a3d not found: ID does not exist" Dec 05 19:31:42 crc kubenswrapper[4828]: I1205 19:31:42.462661 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7717893-ea89-463b-a863-42c3cd0c57cb" path="/var/lib/kubelet/pods/a7717893-ea89-463b-a863-42c3cd0c57cb/volumes" Dec 05 19:31:44 crc kubenswrapper[4828]: I1205 19:31:44.446598 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:31:44 crc kubenswrapper[4828]: E1205 19:31:44.447227 4828 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:31:45 crc kubenswrapper[4828]: I1205 19:31:45.118350 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:31:45 crc kubenswrapper[4828]: I1205 19:31:45.119246 4828 scope.go:117] "RemoveContainer" containerID="7d8435f242c38118d7f0cc40add4f792e71bcab239d7df82aa8fd6e2f7e074fd" Dec 05 19:31:45 crc kubenswrapper[4828]: E1205 19:31:45.119617 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:31:56 crc kubenswrapper[4828]: I1205 19:31:56.446218 4828 scope.go:117] "RemoveContainer" containerID="7d8435f242c38118d7f0cc40add4f792e71bcab239d7df82aa8fd6e2f7e074fd" Dec 05 19:31:56 crc kubenswrapper[4828]: E1205 19:31:56.447176 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:31:59 crc kubenswrapper[4828]: I1205 19:31:59.447487 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:31:59 crc kubenswrapper[4828]: E1205 19:31:59.448118 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:32:09 crc kubenswrapper[4828]: I1205 19:32:09.446889 4828 scope.go:117] "RemoveContainer" containerID="7d8435f242c38118d7f0cc40add4f792e71bcab239d7df82aa8fd6e2f7e074fd" Dec 05 19:32:09 crc kubenswrapper[4828]: E1205 19:32:09.447561 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:32:11 crc kubenswrapper[4828]: I1205 19:32:11.447331 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:32:11 crc kubenswrapper[4828]: E1205 19:32:11.447972 4828 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:32:22 crc kubenswrapper[4828]: I1205 19:32:22.458202 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"
Dec 05 19:32:22 crc kubenswrapper[4828]: E1205 19:32:22.459075 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:32:23 crc kubenswrapper[4828]: I1205 19:32:23.446274 4828 scope.go:117] "RemoveContainer" containerID="7d8435f242c38118d7f0cc40add4f792e71bcab239d7df82aa8fd6e2f7e074fd"
Dec 05 19:32:24 crc kubenswrapper[4828]: I1205 19:32:24.323631 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be"}
Dec 05 19:32:24 crc kubenswrapper[4828]: I1205 19:32:24.324374 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:32:35 crc kubenswrapper[4828]: I1205 19:32:35.126003 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:32:36 crc kubenswrapper[4828]: I1205 19:32:36.447424 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"
Dec 05 19:32:36 crc kubenswrapper[4828]: E1205 19:32:36.447656 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:32:48 crc kubenswrapper[4828]: I1205 19:32:48.446976 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"
Dec 05 19:32:48 crc kubenswrapper[4828]: E1205 19:32:48.448098 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:32:51 crc kubenswrapper[4828]: I1205 19:32:51.048057 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5837-account-create-update-84sg4"]
Dec 05 19:32:51 crc kubenswrapper[4828]: I1205 19:32:51.059964 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5837-account-create-update-84sg4"]
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.035637 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mbh7g"]
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.045726 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3d05-account-create-update-9xpln"]
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.060955 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-f8hmz"]
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.071584 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f21a-account-create-update-nq5bw"]
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.079558 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qdhdn"]
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.087394 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3d05-account-create-update-9xpln"]
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.095971 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-f8hmz"]
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.105062 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mbh7g"]
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.113479 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-qdhdn"]
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.122062 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f21a-account-create-update-nq5bw"]
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.466306 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e307bc-dae3-47b6-8864-0835bcf5844d" path="/var/lib/kubelet/pods/07e307bc-dae3-47b6-8864-0835bcf5844d/volumes"
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.468391 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27482011-da42-44e5-85ba-bd369aefc5b6" path="/var/lib/kubelet/pods/27482011-da42-44e5-85ba-bd369aefc5b6/volumes"
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.469510 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34cf8c2c-f21e-4a27-a777-52b69bc7164b" path="/var/lib/kubelet/pods/34cf8c2c-f21e-4a27-a777-52b69bc7164b/volumes"
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.470538 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb3621e-b696-4551-a40a-ed30e961d2dc" path="/var/lib/kubelet/pods/5fb3621e-b696-4551-a40a-ed30e961d2dc/volumes"
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.472558 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81c5894e-e6ac-4192-a24a-b7c8375c47e8" path="/var/lib/kubelet/pods/81c5894e-e6ac-4192-a24a-b7c8375c47e8/volumes"
Dec 05 19:32:52 crc kubenswrapper[4828]: I1205 19:32:52.473612 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f004af24-6047-4eea-a073-dde452ac983f" path="/var/lib/kubelet/pods/f004af24-6047-4eea-a073-dde452ac983f/volumes"
Dec 05 19:33:01 crc kubenswrapper[4828]: I1205 19:33:01.447660 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"
Dec 05 19:33:01 crc kubenswrapper[4828]: E1205 19:33:01.448329 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:33:12 crc kubenswrapper[4828]: I1205 19:33:12.446530 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"
Dec 05 19:33:12 crc kubenswrapper[4828]: E1205 19:33:12.447308 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:33:21 crc kubenswrapper[4828]: I1205 19:33:21.037566 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-m8vpv"]
Dec 05 19:33:21 crc kubenswrapper[4828]: I1205 19:33:21.045009 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-m8vpv"]
Dec 05 19:33:22 crc kubenswrapper[4828]: I1205 19:33:22.460542 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f5d1d7e-96cd-493a-84e4-1a605338a206" path="/var/lib/kubelet/pods/8f5d1d7e-96cd-493a-84e4-1a605338a206/volumes"
Dec 05 19:33:27 crc kubenswrapper[4828]: I1205 19:33:27.447253 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"
Dec 05 19:33:27 crc kubenswrapper[4828]: E1205 19:33:27.448136 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:33:31 crc kubenswrapper[4828]: I1205 19:33:31.676862 4828 scope.go:117] "RemoveContainer" containerID="743cbd5922013d2fabdd12757cdfae6cd701e050c51ef05882687e44a6da8c1c"
Dec 05 19:33:31 crc kubenswrapper[4828]: I1205 19:33:31.708151 4828 scope.go:117] "RemoveContainer" containerID="afc1891d2c3a3b9b3b050456cb66b4f37ea89998ae0b50557706423a56fc7b16"
Dec 05 19:33:31 crc kubenswrapper[4828]: I1205 19:33:31.751373 4828 scope.go:117] "RemoveContainer" containerID="1fe2de0a4295b113cb59f2d8443d4d2b2e1b205064a22104485c7e07bd11ce6a"
Dec 05 19:33:31 crc kubenswrapper[4828]: I1205 19:33:31.771343 4828 scope.go:117] "RemoveContainer" containerID="2349e6ebd4947fb1bf8083fcae7eda8c2df8c769a5dcda7b1816e37ac2655f70"
Dec 05 19:33:31 crc kubenswrapper[4828]: I1205 19:33:31.821601 4828 scope.go:117] "RemoveContainer" containerID="f84b11559d84b6f44aabe84c0fce64fa62929d53bb1388d57cae059589ba787f"
Dec 05 19:33:31 crc kubenswrapper[4828]: I1205 19:33:31.868248 4828 scope.go:117] "RemoveContainer" containerID="3afdd10b9ae72ea7e1e19301113947ebcb68a96e5b0a8253230375db1ab95c0d"
Dec 05 19:33:31 crc kubenswrapper[4828]: I1205 19:33:31.905885 4828 scope.go:117] "RemoveContainer" containerID="c46a0ac6be5918360885a9f6a2919dca06b0185ad56f87ab0c0141d530443453"
Dec 05 19:33:31 crc kubenswrapper[4828]: I1205 19:33:31.962667 4828 scope.go:117] "RemoveContainer" containerID="6c7129beceae54ee9d411b9f5f637c07bd22440adfeaceddc507b3e36ed67d2f"
Dec 05 19:33:31 crc kubenswrapper[4828]: I1205 19:33:31.986167 4828 scope.go:117] "RemoveContainer" containerID="72dccee6dfe9227300223ae3fcf39a27d3cc60e44c0517d36cae50f91590db18"
Dec 05 19:33:32 crc kubenswrapper[4828]: I1205 19:33:32.005093 4828 scope.go:117] "RemoveContainer" containerID="35e579cb56f91cd1eff276f8271f0f95860e777fbd8d75f885d1097a5f71078b"
Dec 05 19:33:40 crc kubenswrapper[4828]: I1205 19:33:40.447220 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"
Dec 05 19:33:40 crc kubenswrapper[4828]: E1205 19:33:40.448189 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.038333 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-6lkzn"]
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.047620 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-vfkq9"]
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.058029 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-6lkzn"]
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.066787 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fcdj6"]
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.078442 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-aa5a-account-create-update-44sfn"]
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.089342 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5613-account-create-update-9xhmx"]
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.097484 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-vfkq9"]
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.106188 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-aa5a-account-create-update-44sfn"]
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.114315 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fcdj6"]
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.122038 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5613-account-create-update-9xhmx"]
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.133428 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b57a-account-create-update-xqckx"]
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.144967 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b57a-account-create-update-xqckx"]
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.460402 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176021e9-3e79-43bb-9c96-54d69defaba1" path="/var/lib/kubelet/pods/176021e9-3e79-43bb-9c96-54d69defaba1/volumes"
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.461193 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26009bb4-7abf-4522-94c2-e63a94f8c7cb" path="/var/lib/kubelet/pods/26009bb4-7abf-4522-94c2-e63a94f8c7cb/volumes"
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.462027 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bba95cd-b21d-4f44-b575-59527cf3b537" path="/var/lib/kubelet/pods/5bba95cd-b21d-4f44-b575-59527cf3b537/volumes"
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.462725 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9628efae-96b7-43fb-a5cc-05279f664d77" path="/var/lib/kubelet/pods/9628efae-96b7-43fb-a5cc-05279f664d77/volumes"
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.463971 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3fd15a0-f362-4f74-bde6-0df71598dcc9" path="/var/lib/kubelet/pods/b3fd15a0-f362-4f74-bde6-0df71598dcc9/volumes"
Dec 05 19:33:50 crc kubenswrapper[4828]: I1205 19:33:50.464565 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c341eebe-b27c-4dee-bb8e-477cd913128b" path="/var/lib/kubelet/pods/c341eebe-b27c-4dee-bb8e-477cd913128b/volumes"
Dec 05 19:33:54 crc kubenswrapper[4828]: I1205 19:33:54.446584 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"
Dec 05 19:33:54 crc kubenswrapper[4828]: E1205 19:33:54.447312 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:33:55 crc kubenswrapper[4828]: I1205 19:33:55.028915 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-dks6t"]
Dec 05 19:33:55 crc kubenswrapper[4828]: I1205 19:33:55.038769 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-dks6t"]
Dec 05 19:33:56 crc kubenswrapper[4828]: I1205 19:33:56.457780 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879d8ff7-8aac-4c43-999f-064029a8b7cf" path="/var/lib/kubelet/pods/879d8ff7-8aac-4c43-999f-064029a8b7cf/volumes"
Dec 05 19:34:04 crc kubenswrapper[4828]: I1205 19:34:04.276531 4828 generic.go:334] "Generic (PLEG): container finished" podID="f959e321-6568-4dd3-8c87-0ebb49d9c517" containerID="87bb6c27da1e3373eb6d1654b4e8507a02d5e8bb5d33b28d07ace3b1d669184c" exitCode=0
Dec 05 19:34:04 crc kubenswrapper[4828]: I1205 19:34:04.277130 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" event={"ID":"f959e321-6568-4dd3-8c87-0ebb49d9c517","Type":"ContainerDied","Data":"87bb6c27da1e3373eb6d1654b4e8507a02d5e8bb5d33b28d07ace3b1d669184c"}
Dec 05 19:34:05 crc kubenswrapper[4828]: I1205 19:34:05.864120 4828 patch_prober.go:28] interesting pod/controller-manager-c5dc4c8b9-ksjsc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 19:34:05 crc kubenswrapper[4828]: I1205 19:34:05.864954 4828 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" podUID="709f6913-f56e-4ac0-a261-3cab0932ac4e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 05 19:34:05 crc kubenswrapper[4828]: I1205 19:34:05.864818 4828 patch_prober.go:28] interesting pod/controller-manager-c5dc4c8b9-ksjsc container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 19:34:05 crc kubenswrapper[4828]: I1205 19:34:05.870964 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-c5dc4c8b9-ksjsc" podUID="709f6913-f56e-4ac0-a261-3cab0932ac4e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.393023 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr"
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.466889 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-ssh-key\") pod \"f959e321-6568-4dd3-8c87-0ebb49d9c517\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") "
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.466970 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4d9c\" (UniqueName: \"kubernetes.io/projected/f959e321-6568-4dd3-8c87-0ebb49d9c517-kube-api-access-j4d9c\") pod \"f959e321-6568-4dd3-8c87-0ebb49d9c517\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") "
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.467051 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-bootstrap-combined-ca-bundle\") pod \"f959e321-6568-4dd3-8c87-0ebb49d9c517\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") "
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.467100 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-inventory\") pod \"f959e321-6568-4dd3-8c87-0ebb49d9c517\" (UID: \"f959e321-6568-4dd3-8c87-0ebb49d9c517\") "
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.472609 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f959e321-6568-4dd3-8c87-0ebb49d9c517" (UID: "f959e321-6568-4dd3-8c87-0ebb49d9c517"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.476197 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f959e321-6568-4dd3-8c87-0ebb49d9c517-kube-api-access-j4d9c" (OuterVolumeSpecName: "kube-api-access-j4d9c") pod "f959e321-6568-4dd3-8c87-0ebb49d9c517" (UID: "f959e321-6568-4dd3-8c87-0ebb49d9c517"). InnerVolumeSpecName "kube-api-access-j4d9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.494714 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-inventory" (OuterVolumeSpecName: "inventory") pod "f959e321-6568-4dd3-8c87-0ebb49d9c517" (UID: "f959e321-6568-4dd3-8c87-0ebb49d9c517"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.516607 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f959e321-6568-4dd3-8c87-0ebb49d9c517" (UID: "f959e321-6568-4dd3-8c87-0ebb49d9c517"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.569673 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-inventory\") on node \"crc\" DevicePath \"\""
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.569704 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.569715 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4d9c\" (UniqueName: \"kubernetes.io/projected/f959e321-6568-4dd3-8c87-0ebb49d9c517-kube-api-access-j4d9c\") on node \"crc\" DevicePath \"\""
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.569726 4828 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f959e321-6568-4dd3-8c87-0ebb49d9c517-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.933933 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr" event={"ID":"f959e321-6568-4dd3-8c87-0ebb49d9c517","Type":"ContainerDied","Data":"59e68b36a2880bbe55dbb76a8e93e213151869f225bd4a5044f13668bc27c473"}
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.934297 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59e68b36a2880bbe55dbb76a8e93e213151869f225bd4a5044f13668bc27c473"
Dec 05 19:34:06 crc kubenswrapper[4828]: I1205 19:34:06.934016 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.500261 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"]
Dec 05 19:34:07 crc kubenswrapper[4828]: E1205 19:34:07.500988 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7717893-ea89-463b-a863-42c3cd0c57cb" containerName="extract-utilities"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.501136 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7717893-ea89-463b-a863-42c3cd0c57cb" containerName="extract-utilities"
Dec 05 19:34:07 crc kubenswrapper[4828]: E1205 19:34:07.501247 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7717893-ea89-463b-a863-42c3cd0c57cb" containerName="registry-server"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.501348 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7717893-ea89-463b-a863-42c3cd0c57cb" containerName="registry-server"
Dec 05 19:34:07 crc kubenswrapper[4828]: E1205 19:34:07.501538 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f959e321-6568-4dd3-8c87-0ebb49d9c517" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.501617 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f959e321-6568-4dd3-8c87-0ebb49d9c517" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Dec 05 19:34:07 crc kubenswrapper[4828]: E1205 19:34:07.501693 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7717893-ea89-463b-a863-42c3cd0c57cb" containerName="extract-content"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.501759 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7717893-ea89-463b-a863-42c3cd0c57cb" containerName="extract-content"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.502090 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7717893-ea89-463b-a863-42c3cd0c57cb" containerName="registry-server"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.502181 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f959e321-6568-4dd3-8c87-0ebb49d9c517" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.503055 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.505173 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.505226 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.505621 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.505796 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.511752 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"]
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.590586 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsm6p\" (UniqueName: \"kubernetes.io/projected/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-kube-api-access-tsm6p\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dcrds\" (UID: \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.590678 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dcrds\" (UID: \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.590719 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dcrds\" (UID: \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.692810 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsm6p\" (UniqueName: \"kubernetes.io/projected/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-kube-api-access-tsm6p\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dcrds\" (UID: \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.692880 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dcrds\" (UID: \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.692919 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dcrds\" (UID: \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.698793 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dcrds\" (UID: \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.698847 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dcrds\" (UID: \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.708981 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsm6p\" (UniqueName: \"kubernetes.io/projected/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-kube-api-access-tsm6p\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dcrds\" (UID: \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:34:07 crc kubenswrapper[4828]: I1205 19:34:07.823135 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:34:08 crc kubenswrapper[4828]: I1205 19:34:08.408078 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"]
Dec 05 19:34:08 crc kubenswrapper[4828]: I1205 19:34:08.955256 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds" event={"ID":"04bf9e49-2000-4a46-81a8-3dc1ef7c352f","Type":"ContainerStarted","Data":"0ad021245572faa517807ec7ab9e711b1bc032d11f39bf934dc1abb6e9811329"}
Dec 05 19:34:09 crc kubenswrapper[4828]: I1205 19:34:09.446250 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"
Dec 05 19:34:09 crc kubenswrapper[4828]: E1205 19:34:09.446673 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:34:09 crc kubenswrapper[4828]: I1205 19:34:09.966700 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds" event={"ID":"04bf9e49-2000-4a46-81a8-3dc1ef7c352f","Type":"ContainerStarted","Data":"2133d7b22dc0fa8d59b0a3314842144eff618b360e29791f0311784a3c6e907f"}
Dec 05 19:34:09 crc kubenswrapper[4828]: I1205 19:34:09.983104 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds" podStartSLOduration=2.471720201 podStartE2EDuration="2.983083845s" podCreationTimestamp="2025-12-05 19:34:07 +0000 UTC" firstStartedPulling="2025-12-05 19:34:08.413229556 +0000 UTC m=+1826.308451862" lastFinishedPulling="2025-12-05 19:34:08.9245932 +0000 UTC m=+1826.819815506" observedRunningTime="2025-12-05 19:34:09.978914572 +0000 UTC m=+1827.874136898" watchObservedRunningTime="2025-12-05 19:34:09.983083845 +0000 UTC m=+1827.878306151"
Dec 05 19:34:23 crc kubenswrapper[4828]: I1205 19:34:23.446363 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"
Dec 05 19:34:23 crc kubenswrapper[4828]: E1205 19:34:23.447487 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:34:32 crc kubenswrapper[4828]: I1205 19:34:32.162801 4828 scope.go:117] "RemoveContainer" containerID="8688d4944e7fa9ff38feedf119ac125c3f1f7c11a07f4f7661a47aaec7030503"
Dec 05 19:34:32 crc kubenswrapper[4828]: I1205 19:34:32.198202 4828 scope.go:117] "RemoveContainer" containerID="ac4fba861c6a76224b6bc1adbf0e709bc038d56f599e21f5f2f9543b24d3d152"
Dec 05 19:34:32 crc kubenswrapper[4828]: I1205 19:34:32.263367 4828 scope.go:117] "RemoveContainer" containerID="d568684a3d1026528a84d45cee12bc135f48f9c77728b4e2f4d1012ec93e3625"
Dec 05 19:34:32 crc kubenswrapper[4828]: I1205 19:34:32.300303 4828 scope.go:117] "RemoveContainer" containerID="07551b75d08158b4fcfd49746b8f3e1fb6b9b91f8dcc4e6be101a999c173796c"
Dec 05 19:34:32 crc kubenswrapper[4828]: I1205 19:34:32.366929 4828 scope.go:117] "RemoveContainer" containerID="1c45b0e64d1cdd1b34456146d890d9bf6d450ef8cb9df61191b0ca02bc688b94"
Dec 05 19:34:32 crc kubenswrapper[4828]: I1205 19:34:32.393573 4828 scope.go:117] "RemoveContainer" containerID="35dac0a5fe66ec5dfdbc02ed5c5ec416cc28e2942c60f2c59ef6d04aff71ada6"
Dec 05 19:34:32 crc kubenswrapper[4828]: I1205 19:34:32.448673 4828 scope.go:117] "RemoveContainer" containerID="43e39dbe794c7e3dbda87b4580e5d78e097bd2d4c5a8b89437e49271b3e16cc8"
Dec 05 19:34:36 crc kubenswrapper[4828]: I1205 19:34:36.446353 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7"
Dec 05 19:34:37 crc kubenswrapper[4828]: I1205 19:34:37.055606 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-m6tvf"]
Dec 05 19:34:37 crc kubenswrapper[4828]: I1205 19:34:37.074592 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-v6w4g"]
Dec 05 19:34:37 crc kubenswrapper[4828]: I1205 19:34:37.086210 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mdbj4"]
Dec 05 19:34:37 crc kubenswrapper[4828]: I1205 19:34:37.098469 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-m6tvf"]
Dec 05 19:34:37 crc kubenswrapper[4828]: I1205 19:34:37.108401 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-v6w4g"]
Dec 05 19:34:37 crc kubenswrapper[4828]: I1205 19:34:37.116611 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-mdbj4"]
Dec 05 19:34:37 crc kubenswrapper[4828]: I1205 19:34:37.279759 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"8209c6da3bd70657af20f8ec92896f69eb795638d1fc585ae81fad0dcdd54f0f"}
Dec 05 19:34:38 crc kubenswrapper[4828]: I1205 19:34:38.459780 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08bf11d8-3591-4229-be9a-ce8b8e709739" path="/var/lib/kubelet/pods/08bf11d8-3591-4229-be9a-ce8b8e709739/volumes"
Dec 05 19:34:38 crc kubenswrapper[4828]: I1205 19:34:38.461473 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="614bd6cb-e60d-4c28-9e7e-132ab3040deb" path="/var/lib/kubelet/pods/614bd6cb-e60d-4c28-9e7e-132ab3040deb/volumes"
Dec 05 19:34:38 crc kubenswrapper[4828]: I1205 19:34:38.462383 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb21ca0-c48a-4c70-bb4f-fe1f240b3101" path="/var/lib/kubelet/pods/bcb21ca0-c48a-4c70-bb4f-fe1f240b3101/volumes"
Dec 05 19:34:54 crc kubenswrapper[4828]: I1205 19:34:54.034853 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-9gl2x"]
Dec 05 19:34:54 crc kubenswrapper[4828]: I1205 19:34:54.042596 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-9gl2x"]
Dec 05 19:34:54 crc kubenswrapper[4828]: I1205 19:34:54.458904 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d22bcf5-bf39-4595-8742-5d8c3018e7bf" path="/var/lib/kubelet/pods/8d22bcf5-bf39-4595-8742-5d8c3018e7bf/volumes"
Dec 05 19:34:55 crc kubenswrapper[4828]: I1205 19:34:55.030239 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7gmm7"]
Dec 05 19:34:55 crc kubenswrapper[4828]: I1205 19:34:55.049922 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7gmm7"]
Dec 05 19:34:56 crc kubenswrapper[4828]: I1205 19:34:56.458310 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc75dac-d7b0-41ce-ac4d-94f251036f95" path="/var/lib/kubelet/pods/ffc75dac-d7b0-41ce-ac4d-94f251036f95/volumes"
Dec 05 19:35:01 crc kubenswrapper[4828]: I1205 19:35:01.504079 4828 generic.go:334] "Generic (PLEG): container finished" podID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" containerID="2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be" exitCode=1
Dec 05 19:35:01 crc kubenswrapper[4828]: I1205 19:35:01.504156 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerDied","Data":"2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be"}
Dec 05 19:35:01 crc kubenswrapper[4828]: I1205 19:35:01.504506 4828 scope.go:117] "RemoveContainer" containerID="7d8435f242c38118d7f0cc40add4f792e71bcab239d7df82aa8fd6e2f7e074fd"
Dec 05 19:35:01 crc kubenswrapper[4828]: I1205 19:35:01.505146 4828 scope.go:117] "RemoveContainer" containerID="2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be"
Dec 05 19:35:01 crc kubenswrapper[4828]: E1205 19:35:01.505448 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:35:05 crc kubenswrapper[4828]: I1205 19:35:05.117927 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:35:05 crc kubenswrapper[4828]: I1205 19:35:05.119741 4828 scope.go:117] "RemoveContainer" containerID="2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be"
Dec 05 19:35:05 crc kubenswrapper[4828]: E1205 19:35:05.120280 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:35:05 crc kubenswrapper[4828]: I1205 19:35:05.204479 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:35:05 crc kubenswrapper[4828]: I1205 19:35:05.549393 4828 scope.go:117] "RemoveContainer" containerID="2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be"
Dec 05 19:35:05 crc kubenswrapper[4828]: E1205 19:35:05.549707 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:35:18 crc kubenswrapper[4828]: I1205 19:35:18.447038 4828 scope.go:117] "RemoveContainer" containerID="2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be"
Dec 05 19:35:18 crc kubenswrapper[4828]: E1205 19:35:18.447763 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:35:29 crc kubenswrapper[4828]: I1205 19:35:29.447331 4828 scope.go:117] "RemoveContainer" containerID="2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be"
Dec 05 19:35:29 crc kubenswrapper[4828]: E1205 19:35:29.451574 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:35:32 crc kubenswrapper[4828]: I1205 19:35:32.608032 4828 scope.go:117] "RemoveContainer" containerID="5ee1753ab1252c0568398983c822d650048b764991a2e859706c769c71ba2df1"
Dec 05 19:35:32 crc kubenswrapper[4828]: I1205 19:35:32.640948 4828 scope.go:117] "RemoveContainer" containerID="3f2c6eed6861458f9e6b32c41a9d0da2164f1f361c21514c5879a2593e92fe76"
Dec 05 19:35:32 crc kubenswrapper[4828]: I1205 19:35:32.730899 4828 scope.go:117] "RemoveContainer" containerID="e4a6aa4accfa85b5a1aef88095642407a93bea0c712eb77b6c4a5305580c32bb"
Dec 05 19:35:32 crc kubenswrapper[4828]: I1205 19:35:32.764053 4828 scope.go:117] "RemoveContainer" containerID="4a6ad32a31cc37eca2bf98c701150f8edfb71b9a920b5dc83017a43f65114bfd"
Dec 05 19:35:32 crc kubenswrapper[4828]: I1205 19:35:32.815169 4828 scope.go:117] "RemoveContainer" containerID="d57d2dd93f88a88fabcd4e6c356fb068468d4b77d5c7702115c0c843355a345b"
Dec 05 19:35:41 crc kubenswrapper[4828]: I1205 19:35:41.044519 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2e70-account-create-update-xtvw9"]
Dec 05 19:35:41 crc kubenswrapper[4828]: I1205 19:35:41.054155 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f06e-account-create-update-dv9jl"]
Dec 05 19:35:41 crc kubenswrapper[4828]: I1205 19:35:41.082436 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kw6q7"]
Dec 05 19:35:41 crc kubenswrapper[4828]: I1205 19:35:41.091942 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e150-account-create-update-nhw6k"]
Dec 05 19:35:41 crc kubenswrapper[4828]: I1205 19:35:41.099934 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7npt7"]
Dec 05 19:35:41 crc kubenswrapper[4828]: I1205 19:35:41.108332 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hff7r"]
Dec 05 19:35:41 crc kubenswrapper[4828]: I1205 19:35:41.117278 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2e70-account-create-update-xtvw9"]
Dec 05 19:35:41 crc kubenswrapper[4828]: I1205 19:35:41.125328 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f06e-account-create-update-dv9jl"]
Dec 05 19:35:41 crc kubenswrapper[4828]: I1205 19:35:41.138404 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e150-account-create-update-nhw6k"]
Dec 05 19:35:41 crc kubenswrapper[4828]: I1205 19:35:41.146944 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kw6q7"]
Dec 05 19:35:41 crc kubenswrapper[4828]: I1205 19:35:41.153856 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7npt7"]
Dec 05 19:35:41 crc kubenswrapper[4828]: I1205 19:35:41.160615 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hff7r"]
Dec 05 19:35:42 crc kubenswrapper[4828]: I1205 19:35:42.452568 4828 scope.go:117] "RemoveContainer" containerID="2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be"
Dec 05 19:35:42 crc kubenswrapper[4828]: E1205 19:35:42.453278 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:35:42 crc kubenswrapper[4828]: I1205 19:35:42.458782 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0272a767-af95-4be9-a3ca-bcddcf1c9938" path="/var/lib/kubelet/pods/0272a767-af95-4be9-a3ca-bcddcf1c9938/volumes"
Dec 05 19:35:42 crc kubenswrapper[4828]: I1205 19:35:42.459619 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18760301-df61-407a-9aa4-0149bcf012dc" path="/var/lib/kubelet/pods/18760301-df61-407a-9aa4-0149bcf012dc/volumes"
Dec 05 19:35:42 crc kubenswrapper[4828]: I1205 19:35:42.460216 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb1a69c-f75a-4899-85d6-504b6a4e7847" path="/var/lib/kubelet/pods/8eb1a69c-f75a-4899-85d6-504b6a4e7847/volumes"
Dec 05 19:35:42 crc kubenswrapper[4828]: I1205 19:35:42.460804 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10629d6-8c01-43e1-bd8c-67a3f4e2d678" path="/var/lib/kubelet/pods/b10629d6-8c01-43e1-bd8c-67a3f4e2d678/volumes"
Dec 05 19:35:42 crc kubenswrapper[4828]: I1205 19:35:42.461920 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2814910-93ec-4703-ba42-d50f50b7be57" path="/var/lib/kubelet/pods/b2814910-93ec-4703-ba42-d50f50b7be57/volumes"
Dec 05 19:35:42 crc kubenswrapper[4828]: I1205 19:35:42.462492 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d53c991d-ae7a-4770-a082-a09b5a4340ad" path="/var/lib/kubelet/pods/d53c991d-ae7a-4770-a082-a09b5a4340ad/volumes"
Dec 05 19:35:56 crc kubenswrapper[4828]: I1205 19:35:56.446539 4828 scope.go:117] "RemoveContainer" containerID="2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be"
Dec 05 19:35:56 crc kubenswrapper[4828]: E1205 19:35:56.447527 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:36:00 crc kubenswrapper[4828]: I1205 19:36:00.098271 4828 generic.go:334] "Generic (PLEG): container finished" podID="04bf9e49-2000-4a46-81a8-3dc1ef7c352f" containerID="2133d7b22dc0fa8d59b0a3314842144eff618b360e29791f0311784a3c6e907f" exitCode=0
Dec 05 19:36:00 crc kubenswrapper[4828]: I1205 19:36:00.098338 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds" event={"ID":"04bf9e49-2000-4a46-81a8-3dc1ef7c352f","Type":"ContainerDied","Data":"2133d7b22dc0fa8d59b0a3314842144eff618b360e29791f0311784a3c6e907f"}
Dec 05 19:36:01 crc kubenswrapper[4828]: I1205 19:36:01.539999 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:36:01 crc kubenswrapper[4828]: I1205 19:36:01.587272 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-inventory\") pod \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\" (UID: \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\") "
Dec 05 19:36:01 crc kubenswrapper[4828]: I1205 19:36:01.587341 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsm6p\" (UniqueName: \"kubernetes.io/projected/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-kube-api-access-tsm6p\") pod \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\" (UID: \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\") "
Dec 05 19:36:01 crc kubenswrapper[4828]: I1205 19:36:01.587407 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-ssh-key\") pod \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\" (UID: \"04bf9e49-2000-4a46-81a8-3dc1ef7c352f\") "
Dec 05 19:36:01 crc kubenswrapper[4828]: I1205 19:36:01.592643 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-kube-api-access-tsm6p" (OuterVolumeSpecName: "kube-api-access-tsm6p") pod "04bf9e49-2000-4a46-81a8-3dc1ef7c352f" (UID: "04bf9e49-2000-4a46-81a8-3dc1ef7c352f"). InnerVolumeSpecName "kube-api-access-tsm6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:36:01 crc kubenswrapper[4828]: I1205 19:36:01.614992 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "04bf9e49-2000-4a46-81a8-3dc1ef7c352f" (UID: "04bf9e49-2000-4a46-81a8-3dc1ef7c352f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:36:01 crc kubenswrapper[4828]: I1205 19:36:01.615217 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-inventory" (OuterVolumeSpecName: "inventory") pod "04bf9e49-2000-4a46-81a8-3dc1ef7c352f" (UID: "04bf9e49-2000-4a46-81a8-3dc1ef7c352f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:36:01 crc kubenswrapper[4828]: I1205 19:36:01.689778 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 05 19:36:01 crc kubenswrapper[4828]: I1205 19:36:01.689815 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-inventory\") on node \"crc\" DevicePath \"\""
Dec 05 19:36:01 crc kubenswrapper[4828]: I1205 19:36:01.689837 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsm6p\" (UniqueName: \"kubernetes.io/projected/04bf9e49-2000-4a46-81a8-3dc1ef7c352f-kube-api-access-tsm6p\") on node \"crc\" DevicePath \"\""
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.131784 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds" event={"ID":"04bf9e49-2000-4a46-81a8-3dc1ef7c352f","Type":"ContainerDied","Data":"0ad021245572faa517807ec7ab9e711b1bc032d11f39bf934dc1abb6e9811329"}
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.133008 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ad021245572faa517807ec7ab9e711b1bc032d11f39bf934dc1abb6e9811329"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.132018 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dcrds"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.232222 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"]
Dec 05 19:36:02 crc kubenswrapper[4828]: E1205 19:36:02.234628 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bf9e49-2000-4a46-81a8-3dc1ef7c352f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.234806 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bf9e49-2000-4a46-81a8-3dc1ef7c352f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.235190 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bf9e49-2000-4a46-81a8-3dc1ef7c352f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.236239 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.239365 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.239507 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.239380 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.241042 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.248958 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"]
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.302021 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnprl\" (UniqueName: \"kubernetes.io/projected/814c8a59-108d-4ee6-943c-2f4e11294f14-kube-api-access-xnprl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz\" (UID: \"814c8a59-108d-4ee6-943c-2f4e11294f14\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.302216 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/814c8a59-108d-4ee6-943c-2f4e11294f14-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz\" (UID: \"814c8a59-108d-4ee6-943c-2f4e11294f14\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.302376 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/814c8a59-108d-4ee6-943c-2f4e11294f14-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz\" (UID: \"814c8a59-108d-4ee6-943c-2f4e11294f14\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.404130 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/814c8a59-108d-4ee6-943c-2f4e11294f14-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz\" (UID: \"814c8a59-108d-4ee6-943c-2f4e11294f14\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.404204 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/814c8a59-108d-4ee6-943c-2f4e11294f14-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz\" (UID: \"814c8a59-108d-4ee6-943c-2f4e11294f14\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.404278 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnprl\" (UniqueName: \"kubernetes.io/projected/814c8a59-108d-4ee6-943c-2f4e11294f14-kube-api-access-xnprl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz\" (UID: \"814c8a59-108d-4ee6-943c-2f4e11294f14\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.409558 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/814c8a59-108d-4ee6-943c-2f4e11294f14-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz\" (UID: \"814c8a59-108d-4ee6-943c-2f4e11294f14\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.409605 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/814c8a59-108d-4ee6-943c-2f4e11294f14-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz\" (UID: \"814c8a59-108d-4ee6-943c-2f4e11294f14\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.424060 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnprl\" (UniqueName: \"kubernetes.io/projected/814c8a59-108d-4ee6-943c-2f4e11294f14-kube-api-access-xnprl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz\" (UID: \"814c8a59-108d-4ee6-943c-2f4e11294f14\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"
Dec 05 19:36:02 crc kubenswrapper[4828]: I1205 19:36:02.570320 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"
Dec 05 19:36:03 crc kubenswrapper[4828]: I1205 19:36:03.067106 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"]
Dec 05 19:36:03 crc kubenswrapper[4828]: I1205 19:36:03.082080 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 05 19:36:03 crc kubenswrapper[4828]: I1205 19:36:03.143919 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz" event={"ID":"814c8a59-108d-4ee6-943c-2f4e11294f14","Type":"ContainerStarted","Data":"23a89d7fe101d47d8e033d50a483c8992a8201702dce68c651cf59110d887f06"}
Dec 05 19:36:06 crc kubenswrapper[4828]: I1205 19:36:06.656340 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz" event={"ID":"814c8a59-108d-4ee6-943c-2f4e11294f14","Type":"ContainerStarted","Data":"3ac10590e1a94d543646afd16924aff62f60e303fd72eef4cdce89714c77b639"}
Dec 05 19:36:06 crc kubenswrapper[4828]: I1205 19:36:06.676597 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz" podStartSLOduration=2.089342688 podStartE2EDuration="4.676576806s" podCreationTimestamp="2025-12-05 19:36:02 +0000 UTC" firstStartedPulling="2025-12-05 19:36:03.081683384 +0000 UTC m=+1940.976905700" lastFinishedPulling="2025-12-05 19:36:05.668917512 +0000 UTC m=+1943.564139818" observedRunningTime="2025-12-05 19:36:06.673136373 +0000 UTC m=+1944.568358689" watchObservedRunningTime="2025-12-05 19:36:06.676576806 +0000 UTC m=+1944.571799122"
Dec 05 19:36:11 crc kubenswrapper[4828]: I1205 19:36:11.446100 4828 scope.go:117] "RemoveContainer" containerID="2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be"
Dec 05 19:36:11 crc kubenswrapper[4828]: E1205 19:36:11.446940 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:36:25 crc kubenswrapper[4828]: I1205 19:36:25.447179 4828 scope.go:117] "RemoveContainer" containerID="2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be"
Dec 05 19:36:25 crc kubenswrapper[4828]: I1205 19:36:25.833743 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d"}
Dec 05 19:36:25 crc kubenswrapper[4828]: I1205 19:36:25.834344 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:36:32 crc kubenswrapper[4828]: I1205 19:36:32.959133 4828 scope.go:117] "RemoveContainer" containerID="bb586eed2a6db99ea3888ee9b1fb191dcf3ecbd44849d2f204f7e36f47b7504e"
Dec 05 19:36:33 crc kubenswrapper[4828]: I1205 19:36:33.001450 4828 scope.go:117] "RemoveContainer" containerID="5635895057f29e6b3afc0ee8d8adec2359c7ecaade6a458f4fff395d74ede90b"
Dec 05 19:36:33 crc kubenswrapper[4828]: I1205 19:36:33.051444 4828 scope.go:117] "RemoveContainer" containerID="758309287f3d3ad1459ffc40562e676f16a0b20bc4de9067bc7f86aede09712f"
Dec 05 19:36:33 crc kubenswrapper[4828]: I1205 19:36:33.105724 4828 scope.go:117] "RemoveContainer" containerID="3007293a3624d3be49f2a7c1e3f5ff30a48aadd18374b7511c2c0ec4f19956f9"
Dec 05 19:36:33 crc kubenswrapper[4828]: I1205 19:36:33.158808 4828 scope.go:117] "RemoveContainer" containerID="d62e2b2501ae290f91677a48b482370db05c514733663928dc9d283ced257279"
Dec 05 19:36:33 crc kubenswrapper[4828]: I1205 19:36:33.221472 4828 scope.go:117] "RemoveContainer" containerID="13be22f4321abc31f124f199379e582241a5997cc814697e5eb5d79ad4e831a2"
Dec 05 19:36:35 crc kubenswrapper[4828]: I1205 19:36:35.125772 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:36:40 crc kubenswrapper[4828]: I1205 19:36:40.057965 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hlzjj"]
Dec 05 19:36:40 crc kubenswrapper[4828]: I1205 19:36:40.069202 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hlzjj"]
Dec 05 19:36:40 crc kubenswrapper[4828]: I1205 19:36:40.457385 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67a695c-a0d3-46ac-ad8f-b90c17732e01" path="/var/lib/kubelet/pods/a67a695c-a0d3-46ac-ad8f-b90c17732e01/volumes"
Dec 05 19:37:05 crc kubenswrapper[4828]: I1205 19:37:05.057454 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-km7zx"]
Dec 05 19:37:05 crc kubenswrapper[4828]: I1205 19:37:05.067292 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-km7zx"]
Dec 05 19:37:05 crc kubenswrapper[4828]: I1205 19:37:05.260080 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 19:37:05 crc kubenswrapper[4828]: I1205 19:37:05.260154 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 19:37:06 crc kubenswrapper[4828]: I1205 19:37:06.456711 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0caade90-0fc6-4fa9-9c58-8251b88cb827" path="/var/lib/kubelet/pods/0caade90-0fc6-4fa9-9c58-8251b88cb827/volumes"
Dec 05 19:37:09 crc kubenswrapper[4828]: I1205 19:37:09.034325 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jzjgf"]
Dec 05 19:37:09 crc kubenswrapper[4828]: I1205 19:37:09.043978 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jzjgf"]
Dec 05 19:37:10 crc kubenswrapper[4828]: I1205 19:37:10.458890 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f832ed19-2e34-439e-bb52-37b2919b810e" path="/var/lib/kubelet/pods/f832ed19-2e34-439e-bb52-37b2919b810e/volumes"
Dec 05 19:37:20 crc kubenswrapper[4828]: I1205 19:37:20.363332 4828 generic.go:334] "Generic (PLEG): container finished" podID="814c8a59-108d-4ee6-943c-2f4e11294f14" containerID="3ac10590e1a94d543646afd16924aff62f60e303fd72eef4cdce89714c77b639" exitCode=0
Dec 05 19:37:20 crc kubenswrapper[4828]: I1205 19:37:20.363434 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz" event={"ID":"814c8a59-108d-4ee6-943c-2f4e11294f14","Type":"ContainerDied","Data":"3ac10590e1a94d543646afd16924aff62f60e303fd72eef4cdce89714c77b639"}
Dec 05 19:37:21 crc kubenswrapper[4828]: I1205 19:37:21.752506 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz"
Dec 05 19:37:21 crc kubenswrapper[4828]: I1205 19:37:21.763113 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/814c8a59-108d-4ee6-943c-2f4e11294f14-inventory\") pod \"814c8a59-108d-4ee6-943c-2f4e11294f14\" (UID: \"814c8a59-108d-4ee6-943c-2f4e11294f14\") "
Dec 05 19:37:21 crc kubenswrapper[4828]: I1205 19:37:21.763218 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnprl\" (UniqueName: \"kubernetes.io/projected/814c8a59-108d-4ee6-943c-2f4e11294f14-kube-api-access-xnprl\") pod \"814c8a59-108d-4ee6-943c-2f4e11294f14\" (UID: \"814c8a59-108d-4ee6-943c-2f4e11294f14\") "
Dec 05 19:37:21 crc kubenswrapper[4828]: I1205 19:37:21.763285 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/814c8a59-108d-4ee6-943c-2f4e11294f14-ssh-key\") pod \"814c8a59-108d-4ee6-943c-2f4e11294f14\" (UID: \"814c8a59-108d-4ee6-943c-2f4e11294f14\") "
Dec 05 19:37:21 crc kubenswrapper[4828]: I1205 19:37:21.771740 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814c8a59-108d-4ee6-943c-2f4e11294f14-kube-api-access-xnprl" (OuterVolumeSpecName: "kube-api-access-xnprl") pod "814c8a59-108d-4ee6-943c-2f4e11294f14" (UID: "814c8a59-108d-4ee6-943c-2f4e11294f14"). InnerVolumeSpecName "kube-api-access-xnprl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:37:21 crc kubenswrapper[4828]: I1205 19:37:21.800107 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814c8a59-108d-4ee6-943c-2f4e11294f14-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "814c8a59-108d-4ee6-943c-2f4e11294f14" (UID: "814c8a59-108d-4ee6-943c-2f4e11294f14"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:37:21 crc kubenswrapper[4828]: I1205 19:37:21.802078 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814c8a59-108d-4ee6-943c-2f4e11294f14-inventory" (OuterVolumeSpecName: "inventory") pod "814c8a59-108d-4ee6-943c-2f4e11294f14" (UID: "814c8a59-108d-4ee6-943c-2f4e11294f14"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:37:21 crc kubenswrapper[4828]: I1205 19:37:21.865014 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/814c8a59-108d-4ee6-943c-2f4e11294f14-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:37:21 crc kubenswrapper[4828]: I1205 19:37:21.865056 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnprl\" (UniqueName: \"kubernetes.io/projected/814c8a59-108d-4ee6-943c-2f4e11294f14-kube-api-access-xnprl\") on node \"crc\" DevicePath \"\"" Dec 05 19:37:21 crc kubenswrapper[4828]: I1205 19:37:21.865067 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/814c8a59-108d-4ee6-943c-2f4e11294f14-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.394876 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz" event={"ID":"814c8a59-108d-4ee6-943c-2f4e11294f14","Type":"ContainerDied","Data":"23a89d7fe101d47d8e033d50a483c8992a8201702dce68c651cf59110d887f06"} Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.394940 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23a89d7fe101d47d8e033d50a483c8992a8201702dce68c651cf59110d887f06" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.395052 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.484866 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz"] Dec 05 19:37:22 crc kubenswrapper[4828]: E1205 19:37:22.485261 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814c8a59-108d-4ee6-943c-2f4e11294f14" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.485279 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="814c8a59-108d-4ee6-943c-2f4e11294f14" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.485454 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="814c8a59-108d-4ee6-943c-2f4e11294f14" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.486061 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.488578 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.488704 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.488780 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.489025 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.493301 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz"] Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.579093 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a516aad0-97c7-46b3-b692-660dbd380bff-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz\" (UID: \"a516aad0-97c7-46b3-b692-660dbd380bff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.579133 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl5dg\" (UniqueName: \"kubernetes.io/projected/a516aad0-97c7-46b3-b692-660dbd380bff-kube-api-access-jl5dg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz\" (UID: \"a516aad0-97c7-46b3-b692-660dbd380bff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.579220 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a516aad0-97c7-46b3-b692-660dbd380bff-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz\" (UID: \"a516aad0-97c7-46b3-b692-660dbd380bff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.680656 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a516aad0-97c7-46b3-b692-660dbd380bff-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz\" (UID: \"a516aad0-97c7-46b3-b692-660dbd380bff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.680813 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a516aad0-97c7-46b3-b692-660dbd380bff-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz\" (UID: \"a516aad0-97c7-46b3-b692-660dbd380bff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.680878 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl5dg\" (UniqueName: \"kubernetes.io/projected/a516aad0-97c7-46b3-b692-660dbd380bff-kube-api-access-jl5dg\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz\" (UID: \"a516aad0-97c7-46b3-b692-660dbd380bff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.684504 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a516aad0-97c7-46b3-b692-660dbd380bff-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz\" (UID: \"a516aad0-97c7-46b3-b692-660dbd380bff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.685389 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a516aad0-97c7-46b3-b692-660dbd380bff-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz\" (UID: \"a516aad0-97c7-46b3-b692-660dbd380bff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.698174 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl5dg\" (UniqueName: \"kubernetes.io/projected/a516aad0-97c7-46b3-b692-660dbd380bff-kube-api-access-jl5dg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz\" (UID: \"a516aad0-97c7-46b3-b692-660dbd380bff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:22 crc kubenswrapper[4828]: I1205 19:37:22.810065 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:23 crc kubenswrapper[4828]: I1205 19:37:23.328301 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz"] Dec 05 19:37:23 crc kubenswrapper[4828]: I1205 19:37:23.405128 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" event={"ID":"a516aad0-97c7-46b3-b692-660dbd380bff","Type":"ContainerStarted","Data":"b6f899a3066e5e99da1ff5ee6aae1932a03935bf99c15ed401d14317e749b35f"} Dec 05 19:37:24 crc kubenswrapper[4828]: I1205 19:37:24.415340 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" event={"ID":"a516aad0-97c7-46b3-b692-660dbd380bff","Type":"ContainerStarted","Data":"fa37312c173a1bfea49305496883f6b281cb6fa6d77b3afbbc80187d7dd42770"} Dec 05 19:37:24 crc kubenswrapper[4828]: I1205 19:37:24.440247 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" podStartSLOduration=1.9385075980000002 podStartE2EDuration="2.440213002s" podCreationTimestamp="2025-12-05 19:37:22 +0000 UTC" firstStartedPulling="2025-12-05 19:37:23.331602041 +0000 UTC m=+2021.226824347" lastFinishedPulling="2025-12-05 19:37:23.833307435 +0000 UTC m=+2021.728529751" observedRunningTime="2025-12-05 19:37:24.42984638 +0000 UTC m=+2022.325068696" watchObservedRunningTime="2025-12-05 19:37:24.440213002 +0000 UTC m=+2022.335435308" Dec 05 19:37:29 crc kubenswrapper[4828]: I1205 19:37:29.470222 4828 generic.go:334] "Generic (PLEG): container finished" podID="a516aad0-97c7-46b3-b692-660dbd380bff" containerID="fa37312c173a1bfea49305496883f6b281cb6fa6d77b3afbbc80187d7dd42770" exitCode=0 Dec 05 19:37:29 crc kubenswrapper[4828]: 
I1205 19:37:29.470269 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" event={"ID":"a516aad0-97c7-46b3-b692-660dbd380bff","Type":"ContainerDied","Data":"fa37312c173a1bfea49305496883f6b281cb6fa6d77b3afbbc80187d7dd42770"} Dec 05 19:37:30 crc kubenswrapper[4828]: I1205 19:37:30.891406 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.033099 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl5dg\" (UniqueName: \"kubernetes.io/projected/a516aad0-97c7-46b3-b692-660dbd380bff-kube-api-access-jl5dg\") pod \"a516aad0-97c7-46b3-b692-660dbd380bff\" (UID: \"a516aad0-97c7-46b3-b692-660dbd380bff\") " Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.033314 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a516aad0-97c7-46b3-b692-660dbd380bff-inventory\") pod \"a516aad0-97c7-46b3-b692-660dbd380bff\" (UID: \"a516aad0-97c7-46b3-b692-660dbd380bff\") " Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.033439 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a516aad0-97c7-46b3-b692-660dbd380bff-ssh-key\") pod \"a516aad0-97c7-46b3-b692-660dbd380bff\" (UID: \"a516aad0-97c7-46b3-b692-660dbd380bff\") " Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.044654 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a516aad0-97c7-46b3-b692-660dbd380bff-kube-api-access-jl5dg" (OuterVolumeSpecName: "kube-api-access-jl5dg") pod "a516aad0-97c7-46b3-b692-660dbd380bff" (UID: "a516aad0-97c7-46b3-b692-660dbd380bff"). InnerVolumeSpecName "kube-api-access-jl5dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.063138 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a516aad0-97c7-46b3-b692-660dbd380bff-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a516aad0-97c7-46b3-b692-660dbd380bff" (UID: "a516aad0-97c7-46b3-b692-660dbd380bff"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.063458 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a516aad0-97c7-46b3-b692-660dbd380bff-inventory" (OuterVolumeSpecName: "inventory") pod "a516aad0-97c7-46b3-b692-660dbd380bff" (UID: "a516aad0-97c7-46b3-b692-660dbd380bff"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.135351 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a516aad0-97c7-46b3-b692-660dbd380bff-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.135386 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl5dg\" (UniqueName: \"kubernetes.io/projected/a516aad0-97c7-46b3-b692-660dbd380bff-kube-api-access-jl5dg\") on node \"crc\" DevicePath \"\"" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.135397 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a516aad0-97c7-46b3-b692-660dbd380bff-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.487182 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" event={"ID":"a516aad0-97c7-46b3-b692-660dbd380bff","Type":"ContainerDied","Data":"b6f899a3066e5e99da1ff5ee6aae1932a03935bf99c15ed401d14317e749b35f"} Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.487214 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6f899a3066e5e99da1ff5ee6aae1932a03935bf99c15ed401d14317e749b35f" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.487245 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.572552 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl"] Dec 05 19:37:31 crc kubenswrapper[4828]: E1205 19:37:31.573286 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a516aad0-97c7-46b3-b692-660dbd380bff" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.573314 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a516aad0-97c7-46b3-b692-660dbd380bff" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.573577 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a516aad0-97c7-46b3-b692-660dbd380bff" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.574322 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.577016 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.577166 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.577737 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.579165 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.602054 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl"] Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.747707 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96658594-f9dc-4bc6-8d77-3db81db8d2fd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bctgl\" (UID: \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.747862 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96658594-f9dc-4bc6-8d77-3db81db8d2fd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bctgl\" (UID: \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.748121 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dksd\" (UniqueName: \"kubernetes.io/projected/96658594-f9dc-4bc6-8d77-3db81db8d2fd-kube-api-access-6dksd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bctgl\" (UID: \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.850068 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96658594-f9dc-4bc6-8d77-3db81db8d2fd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bctgl\" (UID: \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.850463 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96658594-f9dc-4bc6-8d77-3db81db8d2fd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bctgl\" (UID: \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.850705 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dksd\" (UniqueName: \"kubernetes.io/projected/96658594-f9dc-4bc6-8d77-3db81db8d2fd-kube-api-access-6dksd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bctgl\" (UID: 
\"96658594-f9dc-4bc6-8d77-3db81db8d2fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.855990 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96658594-f9dc-4bc6-8d77-3db81db8d2fd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bctgl\" (UID: \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.856580 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96658594-f9dc-4bc6-8d77-3db81db8d2fd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bctgl\" (UID: \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.872298 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dksd\" (UniqueName: \"kubernetes.io/projected/96658594-f9dc-4bc6-8d77-3db81db8d2fd-kube-api-access-6dksd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bctgl\" (UID: \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:37:31 crc kubenswrapper[4828]: I1205 19:37:31.895071 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:37:32 crc kubenswrapper[4828]: I1205 19:37:32.389215 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl"] Dec 05 19:37:32 crc kubenswrapper[4828]: I1205 19:37:32.495054 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" event={"ID":"96658594-f9dc-4bc6-8d77-3db81db8d2fd","Type":"ContainerStarted","Data":"bab8f948e432e96a27bd884a0338415e6df45e214da6e5a42f2add35ef222046"} Dec 05 19:37:33 crc kubenswrapper[4828]: I1205 19:37:33.339219 4828 scope.go:117] "RemoveContainer" containerID="86f70020404684ce0a0e17ce7204128d13b3cb7c22ace5acbdf319b82d377102" Dec 05 19:37:33 crc kubenswrapper[4828]: I1205 19:37:33.391155 4828 scope.go:117] "RemoveContainer" containerID="7da84ded894cbe24feecc8768a6b4dde1081df6c215a298f12a6bcc8451f487e" Dec 05 19:37:33 crc kubenswrapper[4828]: I1205 19:37:33.429641 4828 scope.go:117] "RemoveContainer" containerID="ec4fd56f0e0a70af83e61b94fe268eb59a5952f35ad959bf58195926e27a8efa" Dec 05 19:37:33 crc kubenswrapper[4828]: I1205 19:37:33.507956 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" event={"ID":"96658594-f9dc-4bc6-8d77-3db81db8d2fd","Type":"ContainerStarted","Data":"880b7c83de2e897d9af7260febfd0ddd95c947c8ea50d242808df242c6288b11"} Dec 05 19:37:33 crc kubenswrapper[4828]: I1205 19:37:33.529736 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" podStartSLOduration=2.109994162 podStartE2EDuration="2.529716686s" podCreationTimestamp="2025-12-05 19:37:31 +0000 UTC" firstStartedPulling="2025-12-05 19:37:32.395141542 +0000 UTC m=+2030.290363858" lastFinishedPulling="2025-12-05 19:37:32.814864076 +0000 UTC m=+2030.710086382" observedRunningTime="2025-12-05 19:37:33.521676359 +0000 UTC 
m=+2031.416898685" watchObservedRunningTime="2025-12-05 19:37:33.529716686 +0000 UTC m=+2031.424938992" Dec 05 19:37:35 crc kubenswrapper[4828]: I1205 19:37:35.259763 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:37:35 crc kubenswrapper[4828]: I1205 19:37:35.260136 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:37:53 crc kubenswrapper[4828]: I1205 19:37:53.044416 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5qtt7"] Dec 05 19:37:53 crc kubenswrapper[4828]: I1205 19:37:53.061513 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5qtt7"] Dec 05 19:37:54 crc kubenswrapper[4828]: I1205 19:37:54.468748 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4931033-0f5f-4d93-b809-45da0865ddfa" path="/var/lib/kubelet/pods/b4931033-0f5f-4d93-b809-45da0865ddfa/volumes" Dec 05 19:38:05 crc kubenswrapper[4828]: I1205 19:38:05.260306 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:38:05 crc kubenswrapper[4828]: I1205 19:38:05.261084 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:38:05 crc kubenswrapper[4828]: I1205 19:38:05.261165 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:38:05 crc kubenswrapper[4828]: I1205 19:38:05.262360 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8209c6da3bd70657af20f8ec92896f69eb795638d1fc585ae81fad0dcdd54f0f"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 19:38:05 crc kubenswrapper[4828]: I1205 19:38:05.262478 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://8209c6da3bd70657af20f8ec92896f69eb795638d1fc585ae81fad0dcdd54f0f" gracePeriod=600 Dec 05 19:38:05 crc kubenswrapper[4828]: I1205 19:38:05.883350 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="8209c6da3bd70657af20f8ec92896f69eb795638d1fc585ae81fad0dcdd54f0f" exitCode=0 Dec 05 19:38:05 crc kubenswrapper[4828]: I1205 19:38:05.883540 4828 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"8209c6da3bd70657af20f8ec92896f69eb795638d1fc585ae81fad0dcdd54f0f"} Dec 05 19:38:05 crc kubenswrapper[4828]: I1205 19:38:05.883702 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"} Dec 05 19:38:05 crc kubenswrapper[4828]: I1205 19:38:05.883734 4828 scope.go:117] "RemoveContainer" containerID="44273427ab956efcfe69105cf20e92501e73b81a7dc35341e1cfc9b1dd7be2f7" Dec 05 19:38:13 crc kubenswrapper[4828]: I1205 19:38:13.956565 4828 generic.go:334] "Generic (PLEG): container finished" podID="96658594-f9dc-4bc6-8d77-3db81db8d2fd" containerID="880b7c83de2e897d9af7260febfd0ddd95c947c8ea50d242808df242c6288b11" exitCode=0 Dec 05 19:38:13 crc kubenswrapper[4828]: I1205 19:38:13.956614 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" event={"ID":"96658594-f9dc-4bc6-8d77-3db81db8d2fd","Type":"ContainerDied","Data":"880b7c83de2e897d9af7260febfd0ddd95c947c8ea50d242808df242c6288b11"} Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.425087 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.523764 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dksd\" (UniqueName: \"kubernetes.io/projected/96658594-f9dc-4bc6-8d77-3db81db8d2fd-kube-api-access-6dksd\") pod \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\" (UID: \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\") " Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.523996 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96658594-f9dc-4bc6-8d77-3db81db8d2fd-inventory\") pod \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\" (UID: \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\") " Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.524389 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96658594-f9dc-4bc6-8d77-3db81db8d2fd-ssh-key\") pod \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\" (UID: \"96658594-f9dc-4bc6-8d77-3db81db8d2fd\") " Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.531120 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96658594-f9dc-4bc6-8d77-3db81db8d2fd-kube-api-access-6dksd" (OuterVolumeSpecName: "kube-api-access-6dksd") pod "96658594-f9dc-4bc6-8d77-3db81db8d2fd" (UID: "96658594-f9dc-4bc6-8d77-3db81db8d2fd"). InnerVolumeSpecName "kube-api-access-6dksd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.549658 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96658594-f9dc-4bc6-8d77-3db81db8d2fd-inventory" (OuterVolumeSpecName: "inventory") pod "96658594-f9dc-4bc6-8d77-3db81db8d2fd" (UID: "96658594-f9dc-4bc6-8d77-3db81db8d2fd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.553794 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96658594-f9dc-4bc6-8d77-3db81db8d2fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "96658594-f9dc-4bc6-8d77-3db81db8d2fd" (UID: "96658594-f9dc-4bc6-8d77-3db81db8d2fd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.626330 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96658594-f9dc-4bc6-8d77-3db81db8d2fd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.626358 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dksd\" (UniqueName: \"kubernetes.io/projected/96658594-f9dc-4bc6-8d77-3db81db8d2fd-kube-api-access-6dksd\") on node \"crc\" DevicePath \"\"" Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.626369 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96658594-f9dc-4bc6-8d77-3db81db8d2fd-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.981786 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" event={"ID":"96658594-f9dc-4bc6-8d77-3db81db8d2fd","Type":"ContainerDied","Data":"bab8f948e432e96a27bd884a0338415e6df45e214da6e5a42f2add35ef222046"} Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.981837 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bab8f948e432e96a27bd884a0338415e6df45e214da6e5a42f2add35ef222046" Dec 05 19:38:15 crc kubenswrapper[4828]: I1205 19:38:15.981900 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bctgl" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.124505 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f"] Dec 05 19:38:16 crc kubenswrapper[4828]: E1205 19:38:16.125385 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96658594-f9dc-4bc6-8d77-3db81db8d2fd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.125418 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="96658594-f9dc-4bc6-8d77-3db81db8d2fd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.125761 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="96658594-f9dc-4bc6-8d77-3db81db8d2fd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.126856 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.128966 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.129494 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.129660 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.132278 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f"] Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.132901 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.240231 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsbss\" (UniqueName: \"kubernetes.io/projected/3f1c3024-3679-435b-9252-3cd35ee43b4b-kube-api-access-qsbss\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f\" (UID: \"3f1c3024-3679-435b-9252-3cd35ee43b4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.240356 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f1c3024-3679-435b-9252-3cd35ee43b4b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f\" (UID: \"3f1c3024-3679-435b-9252-3cd35ee43b4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.240690 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3f1c3024-3679-435b-9252-3cd35ee43b4b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f\" (UID: \"3f1c3024-3679-435b-9252-3cd35ee43b4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.341912 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3f1c3024-3679-435b-9252-3cd35ee43b4b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f\" (UID: \"3f1c3024-3679-435b-9252-3cd35ee43b4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.341963 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsbss\" (UniqueName: \"kubernetes.io/projected/3f1c3024-3679-435b-9252-3cd35ee43b4b-kube-api-access-qsbss\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f\" (UID: \"3f1c3024-3679-435b-9252-3cd35ee43b4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.342024 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f1c3024-3679-435b-9252-3cd35ee43b4b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f\" 
(UID: \"3f1c3024-3679-435b-9252-3cd35ee43b4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.348441 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3f1c3024-3679-435b-9252-3cd35ee43b4b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f\" (UID: \"3f1c3024-3679-435b-9252-3cd35ee43b4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.348607 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f1c3024-3679-435b-9252-3cd35ee43b4b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f\" (UID: \"3f1c3024-3679-435b-9252-3cd35ee43b4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.363271 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsbss\" (UniqueName: \"kubernetes.io/projected/3f1c3024-3679-435b-9252-3cd35ee43b4b-kube-api-access-qsbss\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f\" (UID: \"3f1c3024-3679-435b-9252-3cd35ee43b4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.452238 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.971053 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f"] Dec 05 19:38:16 crc kubenswrapper[4828]: W1205 19:38:16.972403 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f1c3024_3679_435b_9252_3cd35ee43b4b.slice/crio-a1c1758de5adab4c28efe84e16e1a8b3d0f7249a90319cf0ac7215c5a194d6f1 WatchSource:0}: Error finding container a1c1758de5adab4c28efe84e16e1a8b3d0f7249a90319cf0ac7215c5a194d6f1: Status 404 returned error can't find the container with id a1c1758de5adab4c28efe84e16e1a8b3d0f7249a90319cf0ac7215c5a194d6f1 Dec 05 19:38:16 crc kubenswrapper[4828]: I1205 19:38:16.990475 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" event={"ID":"3f1c3024-3679-435b-9252-3cd35ee43b4b","Type":"ContainerStarted","Data":"a1c1758de5adab4c28efe84e16e1a8b3d0f7249a90319cf0ac7215c5a194d6f1"} Dec 05 19:38:18 crc kubenswrapper[4828]: I1205 19:38:18.003397 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" event={"ID":"3f1c3024-3679-435b-9252-3cd35ee43b4b","Type":"ContainerStarted","Data":"20f55e3a43fe3e61ba307aed88476d137ff2b6cfe095c29b28e0e40058dcfdfb"} Dec 05 19:38:18 crc kubenswrapper[4828]: I1205 19:38:18.049173 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" podStartSLOduration=1.62074574 podStartE2EDuration="2.049145689s" podCreationTimestamp="2025-12-05 19:38:16 +0000 UTC" firstStartedPulling="2025-12-05 19:38:16.974732432 +0000 UTC m=+2074.869954758" lastFinishedPulling="2025-12-05 19:38:17.403132391 +0000 UTC m=+2075.298354707" observedRunningTime="2025-12-05 
19:38:18.024456233 +0000 UTC m=+2075.919678569" watchObservedRunningTime="2025-12-05 19:38:18.049145689 +0000 UTC m=+2075.944368025" Dec 05 19:38:33 crc kubenswrapper[4828]: I1205 19:38:33.565386 4828 scope.go:117] "RemoveContainer" containerID="f0e5f1df9a55681c8f67d807d61cf9713a0f9fa5f9a56a7b9d54425efa189eca" Dec 05 19:38:59 crc kubenswrapper[4828]: I1205 19:38:59.407638 4828 generic.go:334] "Generic (PLEG): container finished" podID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" exitCode=1 Dec 05 19:38:59 crc kubenswrapper[4828]: I1205 19:38:59.407692 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerDied","Data":"38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d"} Dec 05 19:38:59 crc kubenswrapper[4828]: I1205 19:38:59.408071 4828 scope.go:117] "RemoveContainer" containerID="2e10d43a5f9d0e901f1e726fb4c0559d5c9850cabf2a935cf2135d2e228721be" Dec 05 19:38:59 crc kubenswrapper[4828]: I1205 19:38:59.412445 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" Dec 05 19:38:59 crc kubenswrapper[4828]: E1205 19:38:59.415353 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:39:05 crc kubenswrapper[4828]: I1205 19:39:05.118456 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:39:05 crc kubenswrapper[4828]: I1205 19:39:05.119138 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:39:05 crc kubenswrapper[4828]: I1205 19:39:05.120129 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" Dec 05 19:39:05 crc kubenswrapper[4828]: E1205 19:39:05.120555 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:39:09 crc kubenswrapper[4828]: I1205 19:39:09.566428 4828 generic.go:334] "Generic (PLEG): container finished" podID="3f1c3024-3679-435b-9252-3cd35ee43b4b" containerID="20f55e3a43fe3e61ba307aed88476d137ff2b6cfe095c29b28e0e40058dcfdfb" exitCode=0 Dec 05 19:39:09 crc kubenswrapper[4828]: I1205 19:39:09.566538 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" event={"ID":"3f1c3024-3679-435b-9252-3cd35ee43b4b","Type":"ContainerDied","Data":"20f55e3a43fe3e61ba307aed88476d137ff2b6cfe095c29b28e0e40058dcfdfb"} Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.031637 4828 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.180246 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f1c3024-3679-435b-9252-3cd35ee43b4b-inventory\") pod \"3f1c3024-3679-435b-9252-3cd35ee43b4b\" (UID: \"3f1c3024-3679-435b-9252-3cd35ee43b4b\") " Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.180700 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3f1c3024-3679-435b-9252-3cd35ee43b4b-ssh-key\") pod \"3f1c3024-3679-435b-9252-3cd35ee43b4b\" (UID: \"3f1c3024-3679-435b-9252-3cd35ee43b4b\") " Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.180866 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsbss\" (UniqueName: \"kubernetes.io/projected/3f1c3024-3679-435b-9252-3cd35ee43b4b-kube-api-access-qsbss\") pod \"3f1c3024-3679-435b-9252-3cd35ee43b4b\" (UID: \"3f1c3024-3679-435b-9252-3cd35ee43b4b\") " Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.193576 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1c3024-3679-435b-9252-3cd35ee43b4b-kube-api-access-qsbss" (OuterVolumeSpecName: "kube-api-access-qsbss") pod "3f1c3024-3679-435b-9252-3cd35ee43b4b" (UID: "3f1c3024-3679-435b-9252-3cd35ee43b4b"). InnerVolumeSpecName "kube-api-access-qsbss". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.222644 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1c3024-3679-435b-9252-3cd35ee43b4b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3f1c3024-3679-435b-9252-3cd35ee43b4b" (UID: "3f1c3024-3679-435b-9252-3cd35ee43b4b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.246128 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1c3024-3679-435b-9252-3cd35ee43b4b-inventory" (OuterVolumeSpecName: "inventory") pod "3f1c3024-3679-435b-9252-3cd35ee43b4b" (UID: "3f1c3024-3679-435b-9252-3cd35ee43b4b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.283401 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f1c3024-3679-435b-9252-3cd35ee43b4b-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.283437 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3f1c3024-3679-435b-9252-3cd35ee43b4b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.283450 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsbss\" (UniqueName: \"kubernetes.io/projected/3f1c3024-3679-435b-9252-3cd35ee43b4b-kube-api-access-qsbss\") on node \"crc\" DevicePath \"\"" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.586225 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" event={"ID":"3f1c3024-3679-435b-9252-3cd35ee43b4b","Type":"ContainerDied","Data":"a1c1758de5adab4c28efe84e16e1a8b3d0f7249a90319cf0ac7215c5a194d6f1"} Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.586290 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1c1758de5adab4c28efe84e16e1a8b3d0f7249a90319cf0ac7215c5a194d6f1" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.586303 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.716220 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f242c"] Dec 05 19:39:11 crc kubenswrapper[4828]: E1205 19:39:11.716703 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1c3024-3679-435b-9252-3cd35ee43b4b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.716732 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1c3024-3679-435b-9252-3cd35ee43b4b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.717313 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1c3024-3679-435b-9252-3cd35ee43b4b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.718309 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f242c" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.723008 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.723093 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.724429 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.724460 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.725675 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f242c"] Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.796754 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c7014005-08da-4204-96f5-163111e61315-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f242c\" (UID: \"c7014005-08da-4204-96f5-163111e61315\") " pod="openstack/ssh-known-hosts-edpm-deployment-f242c" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.796877 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7014005-08da-4204-96f5-163111e61315-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f242c\" (UID: \"c7014005-08da-4204-96f5-163111e61315\") " pod="openstack/ssh-known-hosts-edpm-deployment-f242c" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.797595 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z8ck\" (UniqueName: \"kubernetes.io/projected/c7014005-08da-4204-96f5-163111e61315-kube-api-access-2z8ck\") pod \"ssh-known-hosts-edpm-deployment-f242c\" (UID: \"c7014005-08da-4204-96f5-163111e61315\") " pod="openstack/ssh-known-hosts-edpm-deployment-f242c" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.910411 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z8ck\" (UniqueName: \"kubernetes.io/projected/c7014005-08da-4204-96f5-163111e61315-kube-api-access-2z8ck\") pod \"ssh-known-hosts-edpm-deployment-f242c\" (UID: \"c7014005-08da-4204-96f5-163111e61315\") " pod="openstack/ssh-known-hosts-edpm-deployment-f242c" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.910947 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c7014005-08da-4204-96f5-163111e61315-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f242c\" (UID: \"c7014005-08da-4204-96f5-163111e61315\") " pod="openstack/ssh-known-hosts-edpm-deployment-f242c" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.912016 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7014005-08da-4204-96f5-163111e61315-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f242c\" (UID: \"c7014005-08da-4204-96f5-163111e61315\") " pod="openstack/ssh-known-hosts-edpm-deployment-f242c" Dec 05 19:39:11 crc 
kubenswrapper[4828]: I1205 19:39:11.921750 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c7014005-08da-4204-96f5-163111e61315-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f242c\" (UID: \"c7014005-08da-4204-96f5-163111e61315\") " pod="openstack/ssh-known-hosts-edpm-deployment-f242c" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.921805 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7014005-08da-4204-96f5-163111e61315-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f242c\" (UID: \"c7014005-08da-4204-96f5-163111e61315\") " pod="openstack/ssh-known-hosts-edpm-deployment-f242c" Dec 05 19:39:11 crc kubenswrapper[4828]: I1205 19:39:11.930252 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z8ck\" (UniqueName: \"kubernetes.io/projected/c7014005-08da-4204-96f5-163111e61315-kube-api-access-2z8ck\") pod \"ssh-known-hosts-edpm-deployment-f242c\" (UID: \"c7014005-08da-4204-96f5-163111e61315\") " pod="openstack/ssh-known-hosts-edpm-deployment-f242c" Dec 05 19:39:12 crc kubenswrapper[4828]: I1205 19:39:12.041439 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f242c" Dec 05 19:39:12 crc kubenswrapper[4828]: I1205 19:39:12.560323 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f242c"] Dec 05 19:39:12 crc kubenswrapper[4828]: I1205 19:39:12.595549 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f242c" event={"ID":"c7014005-08da-4204-96f5-163111e61315","Type":"ContainerStarted","Data":"0fd00d71a35ef2346b4bd8ccb058cf0350185c3b8bc9d40b76b5eb5dc9d51018"} Dec 05 19:39:13 crc kubenswrapper[4828]: I1205 19:39:13.606692 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f242c" event={"ID":"c7014005-08da-4204-96f5-163111e61315","Type":"ContainerStarted","Data":"94dc259bd42e72ce7c41310f6c0658bc782508ac9e2aba8efdc47f130ab93584"} Dec 05 19:39:13 crc kubenswrapper[4828]: I1205 19:39:13.628769 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-f242c" podStartSLOduration=2.002603462 podStartE2EDuration="2.628748882s" podCreationTimestamp="2025-12-05 19:39:11 +0000 UTC" firstStartedPulling="2025-12-05 19:39:12.566521577 +0000 UTC m=+2130.461743883" lastFinishedPulling="2025-12-05 19:39:13.192666977 +0000 UTC m=+2131.087889303" observedRunningTime="2025-12-05 19:39:13.627993563 +0000 UTC m=+2131.523215869" watchObservedRunningTime="2025-12-05 19:39:13.628748882 +0000 UTC m=+2131.523971208" Dec 05 19:39:17 crc kubenswrapper[4828]: I1205 19:39:17.447115 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" Dec 05 19:39:17 crc kubenswrapper[4828]: E1205 19:39:17.447858 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:39:20 crc 
Dec 05 19:39:20 crc kubenswrapper[4828]: I1205 19:39:20.674490 4828 generic.go:334] "Generic (PLEG): container finished" podID="c7014005-08da-4204-96f5-163111e61315" containerID="94dc259bd42e72ce7c41310f6c0658bc782508ac9e2aba8efdc47f130ab93584" exitCode=0
Dec 05 19:39:20 crc kubenswrapper[4828]: I1205 19:39:20.674582 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f242c" event={"ID":"c7014005-08da-4204-96f5-163111e61315","Type":"ContainerDied","Data":"94dc259bd42e72ce7c41310f6c0658bc782508ac9e2aba8efdc47f130ab93584"}
Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.165237 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f242c"
Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.319637 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z8ck\" (UniqueName: \"kubernetes.io/projected/c7014005-08da-4204-96f5-163111e61315-kube-api-access-2z8ck\") pod \"c7014005-08da-4204-96f5-163111e61315\" (UID: \"c7014005-08da-4204-96f5-163111e61315\") "
Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.319920 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7014005-08da-4204-96f5-163111e61315-ssh-key-openstack-edpm-ipam\") pod \"c7014005-08da-4204-96f5-163111e61315\" (UID: \"c7014005-08da-4204-96f5-163111e61315\") "
Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.319966 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c7014005-08da-4204-96f5-163111e61315-inventory-0\") pod \"c7014005-08da-4204-96f5-163111e61315\" (UID: \"c7014005-08da-4204-96f5-163111e61315\") "
Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.326061 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7014005-08da-4204-96f5-163111e61315-kube-api-access-2z8ck" (OuterVolumeSpecName: "kube-api-access-2z8ck") pod "c7014005-08da-4204-96f5-163111e61315" (UID: "c7014005-08da-4204-96f5-163111e61315"). InnerVolumeSpecName "kube-api-access-2z8ck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.346642 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7014005-08da-4204-96f5-163111e61315-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c7014005-08da-4204-96f5-163111e61315" (UID: "c7014005-08da-4204-96f5-163111e61315"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.361228 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7014005-08da-4204-96f5-163111e61315-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c7014005-08da-4204-96f5-163111e61315" (UID: "c7014005-08da-4204-96f5-163111e61315"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.421952 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7014005-08da-4204-96f5-163111e61315-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.421990 4828 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c7014005-08da-4204-96f5-163111e61315-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.421999 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z8ck\" (UniqueName: \"kubernetes.io/projected/c7014005-08da-4204-96f5-163111e61315-kube-api-access-2z8ck\") on node \"crc\" DevicePath \"\"" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.693206 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f242c" event={"ID":"c7014005-08da-4204-96f5-163111e61315","Type":"ContainerDied","Data":"0fd00d71a35ef2346b4bd8ccb058cf0350185c3b8bc9d40b76b5eb5dc9d51018"} Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.693455 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fd00d71a35ef2346b4bd8ccb058cf0350185c3b8bc9d40b76b5eb5dc9d51018" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.693465 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f242c" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.766886 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2"] Dec 05 19:39:22 crc kubenswrapper[4828]: E1205 19:39:22.767597 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7014005-08da-4204-96f5-163111e61315" containerName="ssh-known-hosts-edpm-deployment" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.767686 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7014005-08da-4204-96f5-163111e61315" containerName="ssh-known-hosts-edpm-deployment" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.768982 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7014005-08da-4204-96f5-163111e61315" containerName="ssh-known-hosts-edpm-deployment" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.770005 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.773137 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.773336 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.773430 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.773589 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.787487 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2"] Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.930200 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7nb9\" (UniqueName: \"kubernetes.io/projected/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-kube-api-access-p7nb9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5lc2\" (UID: \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.930254 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5lc2\" (UID: \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" Dec 05 19:39:22 crc kubenswrapper[4828]: I1205 19:39:22.930390 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5lc2\" (UID: \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" Dec 05 19:39:23 crc kubenswrapper[4828]: I1205 19:39:23.032231 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7nb9\" (UniqueName: \"kubernetes.io/projected/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-kube-api-access-p7nb9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5lc2\" (UID: \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" Dec 05 19:39:23 crc kubenswrapper[4828]: I1205 19:39:23.032550 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5lc2\" (UID: \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" Dec 05 19:39:23 crc kubenswrapper[4828]: I1205 19:39:23.032665 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5lc2\" (UID: \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" Dec 05 19:39:23 crc kubenswrapper[4828]: I1205 19:39:23.037719 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5lc2\" (UID: \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" Dec 05 19:39:23 crc kubenswrapper[4828]: I1205 19:39:23.049332 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5lc2\" (UID: \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" Dec 05 19:39:23 crc kubenswrapper[4828]: I1205 19:39:23.058638 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7nb9\" (UniqueName: \"kubernetes.io/projected/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-kube-api-access-p7nb9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5lc2\" (UID: \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" Dec 05 19:39:23 crc kubenswrapper[4828]: I1205 19:39:23.100280 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" Dec 05 19:39:23 crc kubenswrapper[4828]: I1205 19:39:23.650066 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2"] Dec 05 19:39:23 crc kubenswrapper[4828]: W1205 19:39:23.652529 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b0ec9c6_c67f_45f2_be21_251c97a44a7e.slice/crio-b64f7a45b3c39a044438b070e3603e49b635d48a6d16600ddb487b48d6f04222 WatchSource:0}: Error finding container b64f7a45b3c39a044438b070e3603e49b635d48a6d16600ddb487b48d6f04222: Status 404 returned error can't find the container with id b64f7a45b3c39a044438b070e3603e49b635d48a6d16600ddb487b48d6f04222 Dec 05 19:39:23 crc kubenswrapper[4828]: I1205 19:39:23.702604 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" event={"ID":"9b0ec9c6-c67f-45f2-be21-251c97a44a7e","Type":"ContainerStarted","Data":"b64f7a45b3c39a044438b070e3603e49b635d48a6d16600ddb487b48d6f04222"} Dec 05 19:39:24 crc kubenswrapper[4828]: I1205 19:39:24.715189 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" event={"ID":"9b0ec9c6-c67f-45f2-be21-251c97a44a7e","Type":"ContainerStarted","Data":"7d51ef55fe4f314de0414a81c53de9a531db7aae021c088e8c7cc8bd5578eb6f"} Dec 05 19:39:24 crc kubenswrapper[4828]: I1205 19:39:24.736950 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" podStartSLOduration=2.300994327 podStartE2EDuration="2.73693226s" podCreationTimestamp="2025-12-05 19:39:22 +0000 UTC" firstStartedPulling="2025-12-05 19:39:23.655238569 +0000 UTC m=+2141.550460875" lastFinishedPulling="2025-12-05 19:39:24.091176502 +0000 UTC m=+2141.986398808" observedRunningTime="2025-12-05 19:39:24.732235464 +0000 UTC m=+2142.627457770" watchObservedRunningTime="2025-12-05 19:39:24.73693226 +0000 UTC m=+2142.632154576" 
Dec 05 19:39:32 crc kubenswrapper[4828]: I1205 19:39:32.452070 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d"
Dec 05 19:39:32 crc kubenswrapper[4828]: E1205 19:39:32.452723 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:39:32 crc kubenswrapper[4828]: I1205 19:39:32.792501 4828 generic.go:334] "Generic (PLEG): container finished" podID="9b0ec9c6-c67f-45f2-be21-251c97a44a7e" containerID="7d51ef55fe4f314de0414a81c53de9a531db7aae021c088e8c7cc8bd5578eb6f" exitCode=0
Dec 05 19:39:32 crc kubenswrapper[4828]: I1205 19:39:32.792600 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" event={"ID":"9b0ec9c6-c67f-45f2-be21-251c97a44a7e","Type":"ContainerDied","Data":"7d51ef55fe4f314de0414a81c53de9a531db7aae021c088e8c7cc8bd5578eb6f"}
Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.208442 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2"
Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.378186 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7nb9\" (UniqueName: \"kubernetes.io/projected/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-kube-api-access-p7nb9\") pod \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\" (UID: \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\") "
Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.378343 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-inventory\") pod \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\" (UID: \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\") "
Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.378472 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-ssh-key\") pod \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\" (UID: \"9b0ec9c6-c67f-45f2-be21-251c97a44a7e\") "
Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.386148 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-kube-api-access-p7nb9" (OuterVolumeSpecName: "kube-api-access-p7nb9") pod "9b0ec9c6-c67f-45f2-be21-251c97a44a7e" (UID: "9b0ec9c6-c67f-45f2-be21-251c97a44a7e"). InnerVolumeSpecName "kube-api-access-p7nb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.407689 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9b0ec9c6-c67f-45f2-be21-251c97a44a7e" (UID: "9b0ec9c6-c67f-45f2-be21-251c97a44a7e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.417346 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-inventory" (OuterVolumeSpecName: "inventory") pod "9b0ec9c6-c67f-45f2-be21-251c97a44a7e" (UID: "9b0ec9c6-c67f-45f2-be21-251c97a44a7e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.481495 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.481811 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.481867 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7nb9\" (UniqueName: \"kubernetes.io/projected/9b0ec9c6-c67f-45f2-be21-251c97a44a7e-kube-api-access-p7nb9\") on node \"crc\" DevicePath \"\"" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.812353 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" event={"ID":"9b0ec9c6-c67f-45f2-be21-251c97a44a7e","Type":"ContainerDied","Data":"b64f7a45b3c39a044438b070e3603e49b635d48a6d16600ddb487b48d6f04222"} Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.812404 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b64f7a45b3c39a044438b070e3603e49b635d48a6d16600ddb487b48d6f04222" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.812481 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5lc2" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.901923 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g"] Dec 05 19:39:34 crc kubenswrapper[4828]: E1205 19:39:34.902417 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0ec9c6-c67f-45f2-be21-251c97a44a7e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.902439 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0ec9c6-c67f-45f2-be21-251c97a44a7e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.902665 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0ec9c6-c67f-45f2-be21-251c97a44a7e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.903429 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.908165 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.908468 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.908636 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.908786 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:39:34 crc kubenswrapper[4828]: I1205 19:39:34.922568 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g"] Dec 05 19:39:35 crc kubenswrapper[4828]: I1205 19:39:35.094657 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw5wd\" (UniqueName: \"kubernetes.io/projected/f51d93aa-b89c-4da8-b091-8a9888820e61-kube-api-access-fw5wd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g\" (UID: \"f51d93aa-b89c-4da8-b091-8a9888820e61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" Dec 05 19:39:35 crc kubenswrapper[4828]: I1205 19:39:35.094729 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f51d93aa-b89c-4da8-b091-8a9888820e61-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g\" (UID: \"f51d93aa-b89c-4da8-b091-8a9888820e61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" Dec 05 19:39:35 crc kubenswrapper[4828]: I1205 19:39:35.094929 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f51d93aa-b89c-4da8-b091-8a9888820e61-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g\" (UID: \"f51d93aa-b89c-4da8-b091-8a9888820e61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" Dec 05 19:39:35 crc kubenswrapper[4828]: I1205 19:39:35.197161 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw5wd\" (UniqueName: \"kubernetes.io/projected/f51d93aa-b89c-4da8-b091-8a9888820e61-kube-api-access-fw5wd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g\" (UID: \"f51d93aa-b89c-4da8-b091-8a9888820e61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" Dec 05 19:39:35 crc kubenswrapper[4828]: I1205 19:39:35.197235 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f51d93aa-b89c-4da8-b091-8a9888820e61-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g\" (UID: \"f51d93aa-b89c-4da8-b091-8a9888820e61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" Dec 05 19:39:35 crc kubenswrapper[4828]: I1205 19:39:35.197273 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f51d93aa-b89c-4da8-b091-8a9888820e61-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g\" (UID: 
\"f51d93aa-b89c-4da8-b091-8a9888820e61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" Dec 05 19:39:35 crc kubenswrapper[4828]: I1205 19:39:35.205285 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f51d93aa-b89c-4da8-b091-8a9888820e61-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g\" (UID: \"f51d93aa-b89c-4da8-b091-8a9888820e61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" Dec 05 19:39:35 crc kubenswrapper[4828]: I1205 19:39:35.206124 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f51d93aa-b89c-4da8-b091-8a9888820e61-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g\" (UID: \"f51d93aa-b89c-4da8-b091-8a9888820e61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" Dec 05 19:39:35 crc kubenswrapper[4828]: I1205 19:39:35.215956 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw5wd\" (UniqueName: \"kubernetes.io/projected/f51d93aa-b89c-4da8-b091-8a9888820e61-kube-api-access-fw5wd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g\" (UID: \"f51d93aa-b89c-4da8-b091-8a9888820e61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" Dec 05 19:39:35 crc kubenswrapper[4828]: I1205 19:39:35.218850 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" Dec 05 19:39:35 crc kubenswrapper[4828]: I1205 19:39:35.543645 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g"] Dec 05 19:39:35 crc kubenswrapper[4828]: W1205 19:39:35.551240 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf51d93aa_b89c_4da8_b091_8a9888820e61.slice/crio-0fe9298dd1c9780120e46b9d3b8bea8ad5d69dfcdd74b51e88319dac8bf0ee4b WatchSource:0}: Error finding container 0fe9298dd1c9780120e46b9d3b8bea8ad5d69dfcdd74b51e88319dac8bf0ee4b: Status 404 returned error can't find the container with id 0fe9298dd1c9780120e46b9d3b8bea8ad5d69dfcdd74b51e88319dac8bf0ee4b Dec 05 19:39:35 crc kubenswrapper[4828]: I1205 19:39:35.822339 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" event={"ID":"f51d93aa-b89c-4da8-b091-8a9888820e61","Type":"ContainerStarted","Data":"0fe9298dd1c9780120e46b9d3b8bea8ad5d69dfcdd74b51e88319dac8bf0ee4b"} Dec 05 19:39:36 crc kubenswrapper[4828]: I1205 19:39:36.832183 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" event={"ID":"f51d93aa-b89c-4da8-b091-8a9888820e61","Type":"ContainerStarted","Data":"a734432215a500f03b385b497f6927bffd9317f11ff35b189273364be38d2b50"} Dec 05 19:39:36 crc kubenswrapper[4828]: I1205 19:39:36.851175 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" podStartSLOduration=2.185884554 podStartE2EDuration="2.851156268s" podCreationTimestamp="2025-12-05 19:39:34 +0000 UTC" firstStartedPulling="2025-12-05 19:39:35.552843008 +0000 UTC m=+2153.448065334" lastFinishedPulling="2025-12-05 19:39:36.218114732 +0000 UTC m=+2154.113337048" observedRunningTime="2025-12-05 19:39:36.847931731 +0000 UTC m=+2154.743154037" 
Dec 05 19:39:43 crc kubenswrapper[4828]: I1205 19:39:43.446718 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d"
Dec 05 19:39:43 crc kubenswrapper[4828]: E1205 19:39:43.448166 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:39:46 crc kubenswrapper[4828]: I1205 19:39:46.912852 4828 generic.go:334] "Generic (PLEG): container finished" podID="f51d93aa-b89c-4da8-b091-8a9888820e61" containerID="a734432215a500f03b385b497f6927bffd9317f11ff35b189273364be38d2b50" exitCode=0
Dec 05 19:39:46 crc kubenswrapper[4828]: I1205 19:39:46.912907 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" event={"ID":"f51d93aa-b89c-4da8-b091-8a9888820e61","Type":"ContainerDied","Data":"a734432215a500f03b385b497f6927bffd9317f11ff35b189273364be38d2b50"}
Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.307889 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g"
Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.362924 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f51d93aa-b89c-4da8-b091-8a9888820e61-ssh-key\") pod \"f51d93aa-b89c-4da8-b091-8a9888820e61\" (UID: \"f51d93aa-b89c-4da8-b091-8a9888820e61\") "
Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.363051 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f51d93aa-b89c-4da8-b091-8a9888820e61-inventory\") pod \"f51d93aa-b89c-4da8-b091-8a9888820e61\" (UID: \"f51d93aa-b89c-4da8-b091-8a9888820e61\") "
Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.363140 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw5wd\" (UniqueName: \"kubernetes.io/projected/f51d93aa-b89c-4da8-b091-8a9888820e61-kube-api-access-fw5wd\") pod \"f51d93aa-b89c-4da8-b091-8a9888820e61\" (UID: \"f51d93aa-b89c-4da8-b091-8a9888820e61\") "
Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.373136 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51d93aa-b89c-4da8-b091-8a9888820e61-kube-api-access-fw5wd" (OuterVolumeSpecName: "kube-api-access-fw5wd") pod "f51d93aa-b89c-4da8-b091-8a9888820e61" (UID: "f51d93aa-b89c-4da8-b091-8a9888820e61"). InnerVolumeSpecName "kube-api-access-fw5wd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.404855 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51d93aa-b89c-4da8-b091-8a9888820e61-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f51d93aa-b89c-4da8-b091-8a9888820e61" (UID: "f51d93aa-b89c-4da8-b091-8a9888820e61"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.422066 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51d93aa-b89c-4da8-b091-8a9888820e61-inventory" (OuterVolumeSpecName: "inventory") pod "f51d93aa-b89c-4da8-b091-8a9888820e61" (UID: "f51d93aa-b89c-4da8-b091-8a9888820e61"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.469469 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f51d93aa-b89c-4da8-b091-8a9888820e61-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.469498 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f51d93aa-b89c-4da8-b091-8a9888820e61-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.469510 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw5wd\" (UniqueName: \"kubernetes.io/projected/f51d93aa-b89c-4da8-b091-8a9888820e61-kube-api-access-fw5wd\") on node \"crc\" DevicePath \"\"" Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.940584 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" event={"ID":"f51d93aa-b89c-4da8-b091-8a9888820e61","Type":"ContainerDied","Data":"0fe9298dd1c9780120e46b9d3b8bea8ad5d69dfcdd74b51e88319dac8bf0ee4b"} Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.940941 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fe9298dd1c9780120e46b9d3b8bea8ad5d69dfcdd74b51e88319dac8bf0ee4b" Dec 05 19:39:48 crc kubenswrapper[4828]: I1205 19:39:48.940658 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.014540 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8"] Dec 05 19:39:49 crc kubenswrapper[4828]: E1205 19:39:49.015115 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51d93aa-b89c-4da8-b091-8a9888820e61" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.015141 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51d93aa-b89c-4da8-b091-8a9888820e61" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.015362 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51d93aa-b89c-4da8-b091-8a9888820e61" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.016197 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.018860 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.018898 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.019014 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.019124 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.019187 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.019334 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.021428 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.021635 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.035743 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8"] Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084105 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084191 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs44n\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-kube-api-access-zs44n\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084220 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084248 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084278 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084306 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084350 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084414 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084468 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084578 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084623 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084666 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084701 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.084734 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.186937 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.187228 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs44n\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-kube-api-access-zs44n\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.187371 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.187487 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.187619 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.187723 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.187970 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.188122 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.188256 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.188365 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.188470 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.188583 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.188804 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.189073 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.192319 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.193011 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.193635 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.199717 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.200225 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.200345 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.201946 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.208310 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.208744 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.209445 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.209475 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs44n\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-kube-api-access-zs44n\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.209623 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.210784 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.218622 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.333216 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.885245 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8"] Dec 05 19:39:49 crc kubenswrapper[4828]: I1205 19:39:49.948475 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" event={"ID":"ab033631-5ea0-4fce-a4e3-3f0c390f07ac","Type":"ContainerStarted","Data":"7567ae55160b7a62be68b6c977dcc690d2b559359fe7a39b0b05318fd7b2bd6e"} Dec 05 19:39:50 crc kubenswrapper[4828]: I1205 19:39:50.958272 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" event={"ID":"ab033631-5ea0-4fce-a4e3-3f0c390f07ac","Type":"ContainerStarted","Data":"489a5b44af04d4e1f38c26b367928a01194db75091b5ce916c59817f8fbe84a5"} Dec 05 19:39:50 crc kubenswrapper[4828]: I1205 19:39:50.974230 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" podStartSLOduration=2.542619857 podStartE2EDuration="2.974194301s" podCreationTimestamp="2025-12-05 19:39:48 +0000 UTC" firstStartedPulling="2025-12-05 19:39:49.884605328 +0000 UTC m=+2167.779827634" lastFinishedPulling="2025-12-05 19:39:50.316179782 +0000 UTC m=+2168.211402078" observedRunningTime="2025-12-05 19:39:50.973654207 +0000 UTC m=+2168.868876543" watchObservedRunningTime="2025-12-05 19:39:50.974194301 +0000 UTC m=+2168.869416617" Dec 05 19:39:54 crc kubenswrapper[4828]: I1205 19:39:54.446308 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" Dec 05 19:39:54 crc kubenswrapper[4828]: E1205 19:39:54.446753 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:40:05 crc kubenswrapper[4828]: I1205 19:40:05.259606 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:40:05 crc kubenswrapper[4828]: I1205 19:40:05.260126 4828 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:40:07 crc kubenswrapper[4828]: I1205 19:40:07.446267 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" Dec 05 19:40:07 crc kubenswrapper[4828]: E1205 19:40:07.446506 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:40:18 crc kubenswrapper[4828]: I1205 19:40:18.448101 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" Dec 05 19:40:18 crc kubenswrapper[4828]: E1205 19:40:18.448794 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:40:24 crc kubenswrapper[4828]: I1205 19:40:24.816732 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jb7vj"] Dec 05 19:40:24 crc kubenswrapper[4828]: I1205 19:40:24.819889 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:24 crc kubenswrapper[4828]: I1205 19:40:24.826504 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jb7vj"] Dec 05 19:40:24 crc kubenswrapper[4828]: I1205 19:40:24.923387 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp566\" (UniqueName: \"kubernetes.io/projected/f125710c-805e-4584-94a8-811943d6907c-kube-api-access-xp566\") pod \"community-operators-jb7vj\" (UID: \"f125710c-805e-4584-94a8-811943d6907c\") " pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:24 crc kubenswrapper[4828]: I1205 19:40:24.923468 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f125710c-805e-4584-94a8-811943d6907c-catalog-content\") pod \"community-operators-jb7vj\" (UID: \"f125710c-805e-4584-94a8-811943d6907c\") " pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:24 crc kubenswrapper[4828]: I1205 19:40:24.923533 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f125710c-805e-4584-94a8-811943d6907c-utilities\") pod \"community-operators-jb7vj\" (UID: \"f125710c-805e-4584-94a8-811943d6907c\") " pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:25 crc kubenswrapper[4828]: I1205 19:40:25.024903 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp566\" (UniqueName: \"kubernetes.io/projected/f125710c-805e-4584-94a8-811943d6907c-kube-api-access-xp566\") pod \"community-operators-jb7vj\" (UID: \"f125710c-805e-4584-94a8-811943d6907c\") " pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:25 crc kubenswrapper[4828]: I1205 19:40:25.024983 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f125710c-805e-4584-94a8-811943d6907c-catalog-content\") pod \"community-operators-jb7vj\" (UID: \"f125710c-805e-4584-94a8-811943d6907c\") " pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:25 crc kubenswrapper[4828]: I1205 19:40:25.025042 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f125710c-805e-4584-94a8-811943d6907c-utilities\") pod \"community-operators-jb7vj\" (UID: \"f125710c-805e-4584-94a8-811943d6907c\") " pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:25 crc kubenswrapper[4828]: I1205 19:40:25.025544 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f125710c-805e-4584-94a8-811943d6907c-catalog-content\") pod \"community-operators-jb7vj\" (UID: \"f125710c-805e-4584-94a8-811943d6907c\") " pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:25 crc kubenswrapper[4828]: I1205 19:40:25.025584 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f125710c-805e-4584-94a8-811943d6907c-utilities\") pod \"community-operators-jb7vj\" (UID: \"f125710c-805e-4584-94a8-811943d6907c\") " pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:25 crc kubenswrapper[4828]: I1205 19:40:25.048007 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xp566\" (UniqueName: \"kubernetes.io/projected/f125710c-805e-4584-94a8-811943d6907c-kube-api-access-xp566\") pod \"community-operators-jb7vj\" (UID: \"f125710c-805e-4584-94a8-811943d6907c\") " pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:25 crc kubenswrapper[4828]: I1205 19:40:25.139484 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:25 crc kubenswrapper[4828]: I1205 19:40:25.677753 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jb7vj"] Dec 05 19:40:26 crc kubenswrapper[4828]: I1205 19:40:26.324436 4828 generic.go:334] "Generic (PLEG): container finished" podID="f125710c-805e-4584-94a8-811943d6907c" containerID="3b90e2354b88b7b22481143035ac7e8fdc25f5685bea076b1fcd3828b9073762" exitCode=0 Dec 05 19:40:26 crc kubenswrapper[4828]: I1205 19:40:26.324809 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jb7vj" event={"ID":"f125710c-805e-4584-94a8-811943d6907c","Type":"ContainerDied","Data":"3b90e2354b88b7b22481143035ac7e8fdc25f5685bea076b1fcd3828b9073762"} Dec 05 19:40:26 crc kubenswrapper[4828]: I1205 19:40:26.324848 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jb7vj" event={"ID":"f125710c-805e-4584-94a8-811943d6907c","Type":"ContainerStarted","Data":"e336e1cc550b110d4dafd5314f808439fc216a50d4217f42ad72bc4a77e99380"} Dec 05 19:40:27 crc kubenswrapper[4828]: I1205 19:40:27.334376 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jb7vj" event={"ID":"f125710c-805e-4584-94a8-811943d6907c","Type":"ContainerStarted","Data":"05138fc92804929dd60d74566691693e43dfb1c8db628637cd8ee3855feb7485"} Dec 05 19:40:28 crc kubenswrapper[4828]: I1205 19:40:28.346422 4828 generic.go:334] "Generic (PLEG): container finished" podID="f125710c-805e-4584-94a8-811943d6907c" containerID="05138fc92804929dd60d74566691693e43dfb1c8db628637cd8ee3855feb7485" exitCode=0 Dec 05 19:40:28 crc kubenswrapper[4828]: I1205 19:40:28.346666 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jb7vj" event={"ID":"f125710c-805e-4584-94a8-811943d6907c","Type":"ContainerDied","Data":"05138fc92804929dd60d74566691693e43dfb1c8db628637cd8ee3855feb7485"} Dec 05 19:40:29 crc kubenswrapper[4828]: I1205 19:40:29.356432 4828 generic.go:334] "Generic (PLEG): container finished" podID="ab033631-5ea0-4fce-a4e3-3f0c390f07ac" containerID="489a5b44af04d4e1f38c26b367928a01194db75091b5ce916c59817f8fbe84a5" exitCode=0 Dec 05 19:40:29 crc kubenswrapper[4828]: I1205 19:40:29.356522 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" event={"ID":"ab033631-5ea0-4fce-a4e3-3f0c390f07ac","Type":"ContainerDied","Data":"489a5b44af04d4e1f38c26b367928a01194db75091b5ce916c59817f8fbe84a5"} Dec 05 19:40:29 crc kubenswrapper[4828]: I1205 19:40:29.360183 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jb7vj" event={"ID":"f125710c-805e-4584-94a8-811943d6907c","Type":"ContainerStarted","Data":"f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c"} Dec 05 19:40:29 crc kubenswrapper[4828]: I1205 19:40:29.397498 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-jb7vj" podStartSLOduration=2.90893412 podStartE2EDuration="5.397474811s" podCreationTimestamp="2025-12-05 19:40:24 +0000 UTC" firstStartedPulling="2025-12-05 19:40:26.326727921 +0000 UTC m=+2204.221950237" lastFinishedPulling="2025-12-05 19:40:28.815268612 +0000 UTC m=+2206.710490928" observedRunningTime="2025-12-05 19:40:29.395321123 +0000 UTC m=+2207.290543429" watchObservedRunningTime="2025-12-05 19:40:29.397474811 +0000 UTC m=+2207.292697127" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.447507 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" Dec 05 19:40:30 crc kubenswrapper[4828]: E1205 19:40:30.449515 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.833939 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930535 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs44n\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-kube-api-access-zs44n\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930587 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-bootstrap-combined-ca-bundle\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930644 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930665 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-telemetry-combined-ca-bundle\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930683 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-ovn-combined-ca-bundle\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930712 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-neutron-metadata-combined-ca-bundle\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930740 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-ssh-key\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930764 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-inventory\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930787 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930806 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-nova-combined-ca-bundle\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930857 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930882 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-libvirt-combined-ca-bundle\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930905 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-ovn-default-certs-0\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.930941 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-repo-setup-combined-ca-bundle\") pod \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\" (UID: \"ab033631-5ea0-4fce-a4e3-3f0c390f07ac\") " Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.938020 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" 
(UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.938573 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.938758 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.939721 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.940049 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.940060 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.940094 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-kube-api-access-zs44n" (OuterVolumeSpecName: "kube-api-access-zs44n") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "kube-api-access-zs44n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.941249 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.941273 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.942836 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.950289 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.950424 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.963111 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:40:30 crc kubenswrapper[4828]: I1205 19:40:30.971154 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-inventory" (OuterVolumeSpecName: "inventory") pod "ab033631-5ea0-4fce-a4e3-3f0c390f07ac" (UID: "ab033631-5ea0-4fce-a4e3-3f0c390f07ac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032363 4828 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032397 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032411 4828 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032424 4828 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032437 4828 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032450 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032459 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032466 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032477 4828 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032486 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032496 4828 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032504 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032514 4828 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.032523 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs44n\" (UniqueName: \"kubernetes.io/projected/ab033631-5ea0-4fce-a4e3-3f0c390f07ac-kube-api-access-zs44n\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.379769 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" event={"ID":"ab033631-5ea0-4fce-a4e3-3f0c390f07ac","Type":"ContainerDied","Data":"7567ae55160b7a62be68b6c977dcc690d2b559359fe7a39b0b05318fd7b2bd6e"} Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.380362 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7567ae55160b7a62be68b6c977dcc690d2b559359fe7a39b0b05318fd7b2bd6e" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.379901 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.634102 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6"] Dec 05 19:40:31 crc kubenswrapper[4828]: E1205 19:40:31.635042 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab033631-5ea0-4fce-a4e3-3f0c390f07ac" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.635081 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab033631-5ea0-4fce-a4e3-3f0c390f07ac" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.635646 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab033631-5ea0-4fce-a4e3-3f0c390f07ac" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.637251 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.641253 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.641605 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.644102 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.645919 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6"] Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.646006 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.663604 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.745760 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.745842 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rj5j\" (UniqueName: \"kubernetes.io/projected/0ce437eb-13b3-49a9-adcf-874e3e672a8c-kube-api-access-8rj5j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.745958 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.746239 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.746636 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.849533 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" 
(UniqueName: \"kubernetes.io/configmap/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.849604 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.849706 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.849883 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rj5j\" (UniqueName: \"kubernetes.io/projected/0ce437eb-13b3-49a9-adcf-874e3e672a8c-kube-api-access-8rj5j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.850083 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.850577 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.858404 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.858399 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.861016 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.869184 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rj5j\" (UniqueName: \"kubernetes.io/projected/0ce437eb-13b3-49a9-adcf-874e3e672a8c-kube-api-access-8rj5j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57rf6\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:31 crc kubenswrapper[4828]: I1205 19:40:31.975620 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.578924 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6"] Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.653997 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w6lhs"] Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.690268 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6lhs"] Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.690583 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.787025 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbe4327-94ac-433e-ab44-a682c0472459-catalog-content\") pod \"certified-operators-w6lhs\" (UID: \"8bbe4327-94ac-433e-ab44-a682c0472459\") " pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.787087 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzf9m\" (UniqueName: \"kubernetes.io/projected/8bbe4327-94ac-433e-ab44-a682c0472459-kube-api-access-hzf9m\") pod \"certified-operators-w6lhs\" (UID: \"8bbe4327-94ac-433e-ab44-a682c0472459\") " pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.787258 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbe4327-94ac-433e-ab44-a682c0472459-utilities\") pod \"certified-operators-w6lhs\" (UID: \"8bbe4327-94ac-433e-ab44-a682c0472459\") " pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.888931 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbe4327-94ac-433e-ab44-a682c0472459-utilities\") pod \"certified-operators-w6lhs\" (UID: \"8bbe4327-94ac-433e-ab44-a682c0472459\") " pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.889299 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbe4327-94ac-433e-ab44-a682c0472459-catalog-content\") pod \"certified-operators-w6lhs\" (UID: \"8bbe4327-94ac-433e-ab44-a682c0472459\") " pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.889446 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hzf9m\" (UniqueName: \"kubernetes.io/projected/8bbe4327-94ac-433e-ab44-a682c0472459-kube-api-access-hzf9m\") pod \"certified-operators-w6lhs\" (UID: \"8bbe4327-94ac-433e-ab44-a682c0472459\") " pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.889455 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbe4327-94ac-433e-ab44-a682c0472459-utilities\") pod \"certified-operators-w6lhs\" (UID: \"8bbe4327-94ac-433e-ab44-a682c0472459\") " pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.889852 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbe4327-94ac-433e-ab44-a682c0472459-catalog-content\") pod \"certified-operators-w6lhs\" (UID: \"8bbe4327-94ac-433e-ab44-a682c0472459\") " pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:32 crc kubenswrapper[4828]: I1205 19:40:32.907937 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzf9m\" (UniqueName: \"kubernetes.io/projected/8bbe4327-94ac-433e-ab44-a682c0472459-kube-api-access-hzf9m\") pod \"certified-operators-w6lhs\" (UID: \"8bbe4327-94ac-433e-ab44-a682c0472459\") " pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:33 crc kubenswrapper[4828]: I1205 19:40:33.018384 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:33 crc kubenswrapper[4828]: I1205 19:40:33.303190 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6lhs"] Dec 05 19:40:33 crc kubenswrapper[4828]: I1205 19:40:33.399516 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" event={"ID":"0ce437eb-13b3-49a9-adcf-874e3e672a8c","Type":"ContainerStarted","Data":"0a29d0d5d087761c28d2d07cb8c9d6c27045a8563fa6d9b381656acaf5fa4ea7"} Dec 05 19:40:33 crc kubenswrapper[4828]: I1205 19:40:33.400483 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lhs" event={"ID":"8bbe4327-94ac-433e-ab44-a682c0472459","Type":"ContainerStarted","Data":"50f9bc868e8127bd07d9ceeb4f92fd7a675b8a0cfd854967596c5125933f6c1a"} Dec 05 19:40:34 crc kubenswrapper[4828]: I1205 19:40:34.431250 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" event={"ID":"0ce437eb-13b3-49a9-adcf-874e3e672a8c","Type":"ContainerStarted","Data":"76bb673d689a1d456d794faf45720505043fc60e389db47cd39c11354a4b265b"} Dec 05 19:40:34 crc kubenswrapper[4828]: I1205 19:40:34.435345 4828 generic.go:334] "Generic (PLEG): container finished" podID="8bbe4327-94ac-433e-ab44-a682c0472459" containerID="b5a05069fa541b461edf09601e77f3be912291e803e6b267859825e3b9ab9ea6" exitCode=0 Dec 05 19:40:34 crc kubenswrapper[4828]: I1205 19:40:34.435388 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lhs" event={"ID":"8bbe4327-94ac-433e-ab44-a682c0472459","Type":"ContainerDied","Data":"b5a05069fa541b461edf09601e77f3be912291e803e6b267859825e3b9ab9ea6"} Dec 05 19:40:34 crc kubenswrapper[4828]: I1205 19:40:34.455437 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" podStartSLOduration=2.527996496 podStartE2EDuration="3.455417519s" podCreationTimestamp="2025-12-05 19:40:31 +0000 UTC" firstStartedPulling="2025-12-05 19:40:32.580014736 +0000 UTC m=+2210.475237042" lastFinishedPulling="2025-12-05 19:40:33.507435759 +0000 UTC m=+2211.402658065" observedRunningTime="2025-12-05 19:40:34.454884344 +0000 UTC m=+2212.350106720" watchObservedRunningTime="2025-12-05 19:40:34.455417519 +0000 UTC m=+2212.350639835" Dec 05 19:40:35 crc kubenswrapper[4828]: I1205 19:40:35.140177 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:35 crc kubenswrapper[4828]: I1205 19:40:35.140464 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:35 crc kubenswrapper[4828]: I1205 19:40:35.194070 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:35 crc kubenswrapper[4828]: I1205 19:40:35.259427 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:40:35 crc kubenswrapper[4828]: I1205 19:40:35.259507 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:40:35 crc kubenswrapper[4828]: I1205 19:40:35.446375 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lhs" event={"ID":"8bbe4327-94ac-433e-ab44-a682c0472459","Type":"ContainerStarted","Data":"49d3b7a65510ffea445b9a7b662fd18d87af556e63d2999aada8072d462a8631"} Dec 05 19:40:35 crc kubenswrapper[4828]: I1205 19:40:35.522979 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:37 crc kubenswrapper[4828]: I1205 19:40:37.468363 4828 generic.go:334] "Generic (PLEG): container finished" podID="8bbe4327-94ac-433e-ab44-a682c0472459" containerID="49d3b7a65510ffea445b9a7b662fd18d87af556e63d2999aada8072d462a8631" exitCode=0 Dec 05 19:40:37 crc kubenswrapper[4828]: I1205 19:40:37.468528 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lhs" event={"ID":"8bbe4327-94ac-433e-ab44-a682c0472459","Type":"ContainerDied","Data":"49d3b7a65510ffea445b9a7b662fd18d87af556e63d2999aada8072d462a8631"} Dec 05 19:40:37 crc kubenswrapper[4828]: I1205 19:40:37.597370 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jb7vj"] Dec 05 19:40:37 crc kubenswrapper[4828]: I1205 19:40:37.597768 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jb7vj" podUID="f125710c-805e-4584-94a8-811943d6907c" containerName="registry-server" containerID="cri-o://f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c" gracePeriod=2 Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.097453 4828 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.204214 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f125710c-805e-4584-94a8-811943d6907c-catalog-content\") pod \"f125710c-805e-4584-94a8-811943d6907c\" (UID: \"f125710c-805e-4584-94a8-811943d6907c\") " Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.204659 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f125710c-805e-4584-94a8-811943d6907c-utilities\") pod \"f125710c-805e-4584-94a8-811943d6907c\" (UID: \"f125710c-805e-4584-94a8-811943d6907c\") " Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.204706 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp566\" (UniqueName: \"kubernetes.io/projected/f125710c-805e-4584-94a8-811943d6907c-kube-api-access-xp566\") pod \"f125710c-805e-4584-94a8-811943d6907c\" (UID: \"f125710c-805e-4584-94a8-811943d6907c\") " Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.206036 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f125710c-805e-4584-94a8-811943d6907c-utilities" (OuterVolumeSpecName: "utilities") pod "f125710c-805e-4584-94a8-811943d6907c" (UID: "f125710c-805e-4584-94a8-811943d6907c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.219617 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f125710c-805e-4584-94a8-811943d6907c-kube-api-access-xp566" (OuterVolumeSpecName: "kube-api-access-xp566") pod "f125710c-805e-4584-94a8-811943d6907c" (UID: "f125710c-805e-4584-94a8-811943d6907c"). InnerVolumeSpecName "kube-api-access-xp566". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.264172 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f125710c-805e-4584-94a8-811943d6907c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f125710c-805e-4584-94a8-811943d6907c" (UID: "f125710c-805e-4584-94a8-811943d6907c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.306553 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f125710c-805e-4584-94a8-811943d6907c-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.306589 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp566\" (UniqueName: \"kubernetes.io/projected/f125710c-805e-4584-94a8-811943d6907c-kube-api-access-xp566\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.306599 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f125710c-805e-4584-94a8-811943d6907c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.482947 4828 generic.go:334] "Generic (PLEG): container finished" podID="f125710c-805e-4584-94a8-811943d6907c" containerID="f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c" exitCode=0 Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.483010 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jb7vj" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.483037 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jb7vj" event={"ID":"f125710c-805e-4584-94a8-811943d6907c","Type":"ContainerDied","Data":"f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c"} Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.483069 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jb7vj" event={"ID":"f125710c-805e-4584-94a8-811943d6907c","Type":"ContainerDied","Data":"e336e1cc550b110d4dafd5314f808439fc216a50d4217f42ad72bc4a77e99380"} Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.483087 4828 scope.go:117] "RemoveContainer" containerID="f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.488369 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lhs" event={"ID":"8bbe4327-94ac-433e-ab44-a682c0472459","Type":"ContainerStarted","Data":"2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169"} Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.519444 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jb7vj"] Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.529028 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jb7vj"] Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.533257 4828 scope.go:117] "RemoveContainer" containerID="05138fc92804929dd60d74566691693e43dfb1c8db628637cd8ee3855feb7485" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.543013 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w6lhs" podStartSLOduration=3.085399855 podStartE2EDuration="6.542995158s" podCreationTimestamp="2025-12-05 19:40:32 +0000 UTC" firstStartedPulling="2025-12-05 19:40:34.436966948 +0000 UTC m=+2212.332189274" lastFinishedPulling="2025-12-05 19:40:37.894562271 +0000 UTC m=+2215.789784577" observedRunningTime="2025-12-05 19:40:38.539470463 +0000 UTC m=+2216.434692769" 
watchObservedRunningTime="2025-12-05 19:40:38.542995158 +0000 UTC m=+2216.438217464" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.556971 4828 scope.go:117] "RemoveContainer" containerID="3b90e2354b88b7b22481143035ac7e8fdc25f5685bea076b1fcd3828b9073762" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.616124 4828 scope.go:117] "RemoveContainer" containerID="f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c" Dec 05 19:40:38 crc kubenswrapper[4828]: E1205 19:40:38.616719 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c\": container with ID starting with f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c not found: ID does not exist" containerID="f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.616767 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c"} err="failed to get container status \"f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c\": rpc error: code = NotFound desc = could not find container \"f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c\": container with ID starting with f5e9e1da72352c1472c5b80fe0891046e719d3b7b96eab7a544b551f9cdf339c not found: ID does not exist" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.616794 4828 scope.go:117] "RemoveContainer" containerID="05138fc92804929dd60d74566691693e43dfb1c8db628637cd8ee3855feb7485" Dec 05 19:40:38 crc kubenswrapper[4828]: E1205 19:40:38.617288 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05138fc92804929dd60d74566691693e43dfb1c8db628637cd8ee3855feb7485\": container with ID starting with 05138fc92804929dd60d74566691693e43dfb1c8db628637cd8ee3855feb7485 not found: ID does not exist" containerID="05138fc92804929dd60d74566691693e43dfb1c8db628637cd8ee3855feb7485" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.617314 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05138fc92804929dd60d74566691693e43dfb1c8db628637cd8ee3855feb7485"} err="failed to get container status \"05138fc92804929dd60d74566691693e43dfb1c8db628637cd8ee3855feb7485\": rpc error: code = NotFound desc = could not find container \"05138fc92804929dd60d74566691693e43dfb1c8db628637cd8ee3855feb7485\": container with ID starting with 05138fc92804929dd60d74566691693e43dfb1c8db628637cd8ee3855feb7485 not found: ID does not exist" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.617332 4828 scope.go:117] "RemoveContainer" containerID="3b90e2354b88b7b22481143035ac7e8fdc25f5685bea076b1fcd3828b9073762" Dec 05 19:40:38 crc kubenswrapper[4828]: E1205 19:40:38.617797 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b90e2354b88b7b22481143035ac7e8fdc25f5685bea076b1fcd3828b9073762\": container with ID starting with 3b90e2354b88b7b22481143035ac7e8fdc25f5685bea076b1fcd3828b9073762 not found: ID does not exist" containerID="3b90e2354b88b7b22481143035ac7e8fdc25f5685bea076b1fcd3828b9073762" Dec 05 19:40:38 crc kubenswrapper[4828]: I1205 19:40:38.617872 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3b90e2354b88b7b22481143035ac7e8fdc25f5685bea076b1fcd3828b9073762"} err="failed to get container status \"3b90e2354b88b7b22481143035ac7e8fdc25f5685bea076b1fcd3828b9073762\": rpc error: code = NotFound desc = could not find container \"3b90e2354b88b7b22481143035ac7e8fdc25f5685bea076b1fcd3828b9073762\": container with ID starting with 3b90e2354b88b7b22481143035ac7e8fdc25f5685bea076b1fcd3828b9073762 not found: ID does not exist" Dec 05 19:40:40 crc kubenswrapper[4828]: I1205 19:40:40.462451 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f125710c-805e-4584-94a8-811943d6907c" path="/var/lib/kubelet/pods/f125710c-805e-4584-94a8-811943d6907c/volumes" Dec 05 19:40:43 crc kubenswrapper[4828]: I1205 19:40:43.019590 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:43 crc kubenswrapper[4828]: I1205 19:40:43.021947 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:43 crc kubenswrapper[4828]: I1205 19:40:43.075094 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:43 crc kubenswrapper[4828]: I1205 19:40:43.586791 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:43 crc kubenswrapper[4828]: I1205 19:40:43.633269 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6lhs"] Dec 05 19:40:45 crc kubenswrapper[4828]: I1205 19:40:45.446885 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" Dec 05 19:40:45 crc kubenswrapper[4828]: E1205 19:40:45.448021 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:40:45 crc kubenswrapper[4828]: I1205 19:40:45.556953 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w6lhs" podUID="8bbe4327-94ac-433e-ab44-a682c0472459" containerName="registry-server" containerID="cri-o://2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169" gracePeriod=2 Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.023119 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.164006 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbe4327-94ac-433e-ab44-a682c0472459-catalog-content\") pod \"8bbe4327-94ac-433e-ab44-a682c0472459\" (UID: \"8bbe4327-94ac-433e-ab44-a682c0472459\") " Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.164159 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzf9m\" (UniqueName: \"kubernetes.io/projected/8bbe4327-94ac-433e-ab44-a682c0472459-kube-api-access-hzf9m\") pod \"8bbe4327-94ac-433e-ab44-a682c0472459\" (UID: \"8bbe4327-94ac-433e-ab44-a682c0472459\") " Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.164322 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbe4327-94ac-433e-ab44-a682c0472459-utilities\") pod \"8bbe4327-94ac-433e-ab44-a682c0472459\" (UID: \"8bbe4327-94ac-433e-ab44-a682c0472459\") " Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.165279 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbe4327-94ac-433e-ab44-a682c0472459-utilities" (OuterVolumeSpecName: "utilities") pod "8bbe4327-94ac-433e-ab44-a682c0472459" (UID: "8bbe4327-94ac-433e-ab44-a682c0472459"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.170510 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bbe4327-94ac-433e-ab44-a682c0472459-kube-api-access-hzf9m" (OuterVolumeSpecName: "kube-api-access-hzf9m") pod "8bbe4327-94ac-433e-ab44-a682c0472459" (UID: "8bbe4327-94ac-433e-ab44-a682c0472459"). InnerVolumeSpecName "kube-api-access-hzf9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.219882 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbe4327-94ac-433e-ab44-a682c0472459-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bbe4327-94ac-433e-ab44-a682c0472459" (UID: "8bbe4327-94ac-433e-ab44-a682c0472459"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.266904 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbe4327-94ac-433e-ab44-a682c0472459-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.266935 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbe4327-94ac-433e-ab44-a682c0472459-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.266947 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzf9m\" (UniqueName: \"kubernetes.io/projected/8bbe4327-94ac-433e-ab44-a682c0472459-kube-api-access-hzf9m\") on node \"crc\" DevicePath \"\"" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.569019 4828 generic.go:334] "Generic (PLEG): container finished" podID="8bbe4327-94ac-433e-ab44-a682c0472459" containerID="2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169" exitCode=0 Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.569206 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lhs" event={"ID":"8bbe4327-94ac-433e-ab44-a682c0472459","Type":"ContainerDied","Data":"2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169"} Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.569330 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lhs" event={"ID":"8bbe4327-94ac-433e-ab44-a682c0472459","Type":"ContainerDied","Data":"50f9bc868e8127bd07d9ceeb4f92fd7a675b8a0cfd854967596c5125933f6c1a"} Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.569284 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6lhs" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.569386 4828 scope.go:117] "RemoveContainer" containerID="2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.595763 4828 scope.go:117] "RemoveContainer" containerID="49d3b7a65510ffea445b9a7b662fd18d87af556e63d2999aada8072d462a8631" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.598265 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6lhs"] Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.608648 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w6lhs"] Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.617856 4828 scope.go:117] "RemoveContainer" containerID="b5a05069fa541b461edf09601e77f3be912291e803e6b267859825e3b9ab9ea6" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.655486 4828 scope.go:117] "RemoveContainer" containerID="2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169" Dec 05 19:40:46 crc kubenswrapper[4828]: E1205 19:40:46.656155 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169\": container with ID starting with 2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169 not found: ID does not exist" containerID="2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.656213 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169"} err="failed to get container status \"2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169\": rpc error: code = NotFound desc = could not find container \"2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169\": container with ID starting with 2e76c9888165e1b800d76ef1b37405479c83ae98d4c6cbc06211860949ba8169 not found: ID does not exist" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.656248 4828 scope.go:117] "RemoveContainer" containerID="49d3b7a65510ffea445b9a7b662fd18d87af556e63d2999aada8072d462a8631" Dec 05 19:40:46 crc kubenswrapper[4828]: E1205 19:40:46.656658 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49d3b7a65510ffea445b9a7b662fd18d87af556e63d2999aada8072d462a8631\": container with ID starting with 49d3b7a65510ffea445b9a7b662fd18d87af556e63d2999aada8072d462a8631 not found: ID does not exist" containerID="49d3b7a65510ffea445b9a7b662fd18d87af556e63d2999aada8072d462a8631" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.656691 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d3b7a65510ffea445b9a7b662fd18d87af556e63d2999aada8072d462a8631"} err="failed to get container status \"49d3b7a65510ffea445b9a7b662fd18d87af556e63d2999aada8072d462a8631\": rpc error: code = NotFound desc = could not find container \"49d3b7a65510ffea445b9a7b662fd18d87af556e63d2999aada8072d462a8631\": container with ID starting with 49d3b7a65510ffea445b9a7b662fd18d87af556e63d2999aada8072d462a8631 not found: ID does not exist" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.656713 4828 scope.go:117] "RemoveContainer" 
containerID="b5a05069fa541b461edf09601e77f3be912291e803e6b267859825e3b9ab9ea6" Dec 05 19:40:46 crc kubenswrapper[4828]: E1205 19:40:46.656980 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5a05069fa541b461edf09601e77f3be912291e803e6b267859825e3b9ab9ea6\": container with ID starting with b5a05069fa541b461edf09601e77f3be912291e803e6b267859825e3b9ab9ea6 not found: ID does not exist" containerID="b5a05069fa541b461edf09601e77f3be912291e803e6b267859825e3b9ab9ea6" Dec 05 19:40:46 crc kubenswrapper[4828]: I1205 19:40:46.657008 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a05069fa541b461edf09601e77f3be912291e803e6b267859825e3b9ab9ea6"} err="failed to get container status \"b5a05069fa541b461edf09601e77f3be912291e803e6b267859825e3b9ab9ea6\": rpc error: code = NotFound desc = could not find container \"b5a05069fa541b461edf09601e77f3be912291e803e6b267859825e3b9ab9ea6\": container with ID starting with b5a05069fa541b461edf09601e77f3be912291e803e6b267859825e3b9ab9ea6 not found: ID does not exist" Dec 05 19:40:48 crc kubenswrapper[4828]: I1205 19:40:48.481315 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bbe4327-94ac-433e-ab44-a682c0472459" path="/var/lib/kubelet/pods/8bbe4327-94ac-433e-ab44-a682c0472459/volumes" Dec 05 19:40:59 crc kubenswrapper[4828]: I1205 19:40:59.446577 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" Dec 05 19:40:59 crc kubenswrapper[4828]: E1205 19:40:59.447389 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:41:05 crc kubenswrapper[4828]: I1205 19:41:05.259880 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:41:05 crc kubenswrapper[4828]: I1205 19:41:05.260293 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:41:05 crc kubenswrapper[4828]: I1205 19:41:05.260338 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:41:05 crc kubenswrapper[4828]: I1205 19:41:05.261085 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 19:41:05 crc kubenswrapper[4828]: I1205 19:41:05.261137 4828 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116" gracePeriod=600 Dec 05 19:41:05 crc kubenswrapper[4828]: E1205 19:41:05.417706 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:41:05 crc kubenswrapper[4828]: I1205 19:41:05.753737 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116" exitCode=0 Dec 05 19:41:05 crc kubenswrapper[4828]: I1205 19:41:05.753794 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"} Dec 05 19:41:05 crc kubenswrapper[4828]: I1205 19:41:05.753890 4828 scope.go:117] "RemoveContainer" containerID="8209c6da3bd70657af20f8ec92896f69eb795638d1fc585ae81fad0dcdd54f0f" Dec 05 19:41:05 crc kubenswrapper[4828]: I1205 19:41:05.754533 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116" Dec 05 19:41:05 crc kubenswrapper[4828]: E1205 19:41:05.754899 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:41:14 crc kubenswrapper[4828]: I1205 19:41:14.447265 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" Dec 05 19:41:14 crc kubenswrapper[4828]: E1205 19:41:14.448399 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:41:16 crc kubenswrapper[4828]: I1205 19:41:16.447256 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116" Dec 05 19:41:16 crc kubenswrapper[4828]: E1205 19:41:16.448144 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 
19:41:27 crc kubenswrapper[4828]: I1205 19:41:27.447457 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" Dec 05 19:41:27 crc kubenswrapper[4828]: E1205 19:41:27.448482 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:41:30 crc kubenswrapper[4828]: I1205 19:41:30.447009 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116" Dec 05 19:41:30 crc kubenswrapper[4828]: E1205 19:41:30.447675 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:41:39 crc kubenswrapper[4828]: I1205 19:41:39.123449 4828 generic.go:334] "Generic (PLEG): container finished" podID="0ce437eb-13b3-49a9-adcf-874e3e672a8c" containerID="76bb673d689a1d456d794faf45720505043fc60e389db47cd39c11354a4b265b" exitCode=0 Dec 05 19:41:39 crc kubenswrapper[4828]: I1205 19:41:39.123516 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" event={"ID":"0ce437eb-13b3-49a9-adcf-874e3e672a8c","Type":"ContainerDied","Data":"76bb673d689a1d456d794faf45720505043fc60e389db47cd39c11354a4b265b"} Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.559007 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.666222 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ovncontroller-config-0\") pod \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.666341 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ssh-key\") pod \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.666478 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-inventory\") pod \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.666526 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ovn-combined-ca-bundle\") pod \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.666695 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rj5j\" (UniqueName: \"kubernetes.io/projected/0ce437eb-13b3-49a9-adcf-874e3e672a8c-kube-api-access-8rj5j\") pod \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\" (UID: \"0ce437eb-13b3-49a9-adcf-874e3e672a8c\") " Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.673532 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ce437eb-13b3-49a9-adcf-874e3e672a8c-kube-api-access-8rj5j" (OuterVolumeSpecName: "kube-api-access-8rj5j") pod "0ce437eb-13b3-49a9-adcf-874e3e672a8c" (UID: "0ce437eb-13b3-49a9-adcf-874e3e672a8c"). InnerVolumeSpecName "kube-api-access-8rj5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.673686 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0ce437eb-13b3-49a9-adcf-874e3e672a8c" (UID: "0ce437eb-13b3-49a9-adcf-874e3e672a8c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.696947 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0ce437eb-13b3-49a9-adcf-874e3e672a8c" (UID: "0ce437eb-13b3-49a9-adcf-874e3e672a8c"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.726378 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0ce437eb-13b3-49a9-adcf-874e3e672a8c" (UID: "0ce437eb-13b3-49a9-adcf-874e3e672a8c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.729027 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-inventory" (OuterVolumeSpecName: "inventory") pod "0ce437eb-13b3-49a9-adcf-874e3e672a8c" (UID: "0ce437eb-13b3-49a9-adcf-874e3e672a8c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.769126 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rj5j\" (UniqueName: \"kubernetes.io/projected/0ce437eb-13b3-49a9-adcf-874e3e672a8c-kube-api-access-8rj5j\") on node \"crc\" DevicePath \"\"" Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.769169 4828 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.769186 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.769203 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:41:40 crc kubenswrapper[4828]: I1205 19:41:40.769220 4828 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce437eb-13b3-49a9-adcf-874e3e672a8c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.152673 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" event={"ID":"0ce437eb-13b3-49a9-adcf-874e3e672a8c","Type":"ContainerDied","Data":"0a29d0d5d087761c28d2d07cb8c9d6c27045a8563fa6d9b381656acaf5fa4ea7"} Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.153076 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a29d0d5d087761c28d2d07cb8c9d6c27045a8563fa6d9b381656acaf5fa4ea7" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.152730 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57rf6" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.257770 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2"] Dec 05 19:41:41 crc kubenswrapper[4828]: E1205 19:41:41.258239 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbe4327-94ac-433e-ab44-a682c0472459" containerName="extract-utilities" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.258262 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbe4327-94ac-433e-ab44-a682c0472459" containerName="extract-utilities" Dec 05 19:41:41 crc kubenswrapper[4828]: E1205 19:41:41.258281 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce437eb-13b3-49a9-adcf-874e3e672a8c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.258289 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce437eb-13b3-49a9-adcf-874e3e672a8c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 05 19:41:41 crc kubenswrapper[4828]: E1205 19:41:41.258302 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f125710c-805e-4584-94a8-811943d6907c" containerName="registry-server" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.258310 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f125710c-805e-4584-94a8-811943d6907c" containerName="registry-server" Dec 05 19:41:41 crc kubenswrapper[4828]: E1205 19:41:41.258329 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f125710c-805e-4584-94a8-811943d6907c" containerName="extract-utilities" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.258335 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f125710c-805e-4584-94a8-811943d6907c" containerName="extract-utilities" Dec 05 19:41:41 crc kubenswrapper[4828]: E1205 19:41:41.258364 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbe4327-94ac-433e-ab44-a682c0472459" containerName="extract-content" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.258371 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbe4327-94ac-433e-ab44-a682c0472459" containerName="extract-content" Dec 05 19:41:41 crc kubenswrapper[4828]: E1205 19:41:41.258382 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbe4327-94ac-433e-ab44-a682c0472459" containerName="registry-server" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.258389 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbe4327-94ac-433e-ab44-a682c0472459" containerName="registry-server" Dec 05 19:41:41 crc kubenswrapper[4828]: E1205 19:41:41.258402 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f125710c-805e-4584-94a8-811943d6907c" containerName="extract-content" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.258408 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="f125710c-805e-4584-94a8-811943d6907c" containerName="extract-content" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.258624 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ce437eb-13b3-49a9-adcf-874e3e672a8c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.258645 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bbe4327-94ac-433e-ab44-a682c0472459" containerName="registry-server" Dec 05 
19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.258665 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="f125710c-805e-4584-94a8-811943d6907c" containerName="registry-server" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.259447 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.265379 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.265410 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.265617 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.265669 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.265787 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.265867 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.276662 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2"] Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.378377 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.378471 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.378683 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.378971 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwcgf\" (UniqueName: \"kubernetes.io/projected/02cb0b69-3011-491e-8081-0ee1a0053610-kube-api-access-zwcgf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: 
\"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.379026 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.379147 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.447346 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.480418 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwcgf\" (UniqueName: \"kubernetes.io/projected/02cb0b69-3011-491e-8081-0ee1a0053610-kube-api-access-zwcgf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.480470 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.480531 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.480643 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.480675 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" 
Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.480708 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.486600 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.486689 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.488798 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.489352 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.494488 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.500612 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwcgf\" (UniqueName: \"kubernetes.io/projected/02cb0b69-3011-491e-8081-0ee1a0053610-kube-api-access-zwcgf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:41 crc kubenswrapper[4828]: I1205 19:41:41.589273 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:41:42 crc kubenswrapper[4828]: I1205 19:41:42.126107 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2"] Dec 05 19:41:42 crc kubenswrapper[4828]: I1205 19:41:42.132474 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 19:41:42 crc kubenswrapper[4828]: I1205 19:41:42.164646 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154"} Dec 05 19:41:42 crc kubenswrapper[4828]: I1205 19:41:42.164938 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:41:42 crc kubenswrapper[4828]: I1205 19:41:42.165944 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" event={"ID":"02cb0b69-3011-491e-8081-0ee1a0053610","Type":"ContainerStarted","Data":"1a05e57714d02027c017bde692ebfc55131d85b494fc6594f26dbac37e02be4b"} Dec 05 19:41:42 crc kubenswrapper[4828]: I1205 19:41:42.592437 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:41:43 crc kubenswrapper[4828]: I1205 19:41:43.175554 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" event={"ID":"02cb0b69-3011-491e-8081-0ee1a0053610","Type":"ContainerStarted","Data":"1fcab4f8826bb29eee8cf7e4d4bd166ea9a50f44f9ee48599ed85b956af5c9a5"} Dec 05 19:41:43 crc kubenswrapper[4828]: I1205 19:41:43.202094 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" podStartSLOduration=1.744178539 podStartE2EDuration="2.202066842s" podCreationTimestamp="2025-12-05 19:41:41 +0000 UTC" firstStartedPulling="2025-12-05 19:41:42.132193472 +0000 UTC m=+2280.027415778" lastFinishedPulling="2025-12-05 19:41:42.590081775 +0000 UTC m=+2280.485304081" observedRunningTime="2025-12-05 19:41:43.19646224 +0000 UTC m=+2281.091684546" watchObservedRunningTime="2025-12-05 19:41:43.202066842 +0000 UTC m=+2281.097289158" Dec 05 19:41:45 crc kubenswrapper[4828]: I1205 19:41:45.447022 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116" Dec 05 19:41:45 crc kubenswrapper[4828]: E1205 19:41:45.447973 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:41:55 crc kubenswrapper[4828]: I1205 19:41:55.128539 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:41:58 crc kubenswrapper[4828]: I1205 19:41:58.446173 4828 scope.go:117] "RemoveContainer" 
containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116" Dec 05 19:41:58 crc kubenswrapper[4828]: E1205 19:41:58.446705 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:42:09 crc kubenswrapper[4828]: I1205 19:42:09.446354 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116" Dec 05 19:42:09 crc kubenswrapper[4828]: E1205 19:42:09.447240 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:42:22 crc kubenswrapper[4828]: I1205 19:42:22.453211 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116" Dec 05 19:42:22 crc kubenswrapper[4828]: E1205 19:42:22.454047 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:42:31 crc kubenswrapper[4828]: I1205 19:42:31.691576 4828 generic.go:334] "Generic (PLEG): container finished" podID="02cb0b69-3011-491e-8081-0ee1a0053610" containerID="1fcab4f8826bb29eee8cf7e4d4bd166ea9a50f44f9ee48599ed85b956af5c9a5" exitCode=0 Dec 05 19:42:31 crc kubenswrapper[4828]: I1205 19:42:31.691717 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" event={"ID":"02cb0b69-3011-491e-8081-0ee1a0053610","Type":"ContainerDied","Data":"1fcab4f8826bb29eee8cf7e4d4bd166ea9a50f44f9ee48599ed85b956af5c9a5"} Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.106169 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.185453 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwcgf\" (UniqueName: \"kubernetes.io/projected/02cb0b69-3011-491e-8081-0ee1a0053610-kube-api-access-zwcgf\") pod \"02cb0b69-3011-491e-8081-0ee1a0053610\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.185532 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-nova-metadata-neutron-config-0\") pod \"02cb0b69-3011-491e-8081-0ee1a0053610\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.185578 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-neutron-metadata-combined-ca-bundle\") pod \"02cb0b69-3011-491e-8081-0ee1a0053610\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.185607 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-ssh-key\") pod \"02cb0b69-3011-491e-8081-0ee1a0053610\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.185624 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-inventory\") pod \"02cb0b69-3011-491e-8081-0ee1a0053610\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.185730 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-neutron-ovn-metadata-agent-neutron-config-0\") pod \"02cb0b69-3011-491e-8081-0ee1a0053610\" (UID: \"02cb0b69-3011-491e-8081-0ee1a0053610\") " Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.192589 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02cb0b69-3011-491e-8081-0ee1a0053610-kube-api-access-zwcgf" (OuterVolumeSpecName: "kube-api-access-zwcgf") pod "02cb0b69-3011-491e-8081-0ee1a0053610" (UID: "02cb0b69-3011-491e-8081-0ee1a0053610"). InnerVolumeSpecName "kube-api-access-zwcgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.192952 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "02cb0b69-3011-491e-8081-0ee1a0053610" (UID: "02cb0b69-3011-491e-8081-0ee1a0053610"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.215101 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "02cb0b69-3011-491e-8081-0ee1a0053610" (UID: "02cb0b69-3011-491e-8081-0ee1a0053610"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.220494 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-inventory" (OuterVolumeSpecName: "inventory") pod "02cb0b69-3011-491e-8081-0ee1a0053610" (UID: "02cb0b69-3011-491e-8081-0ee1a0053610"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.222725 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "02cb0b69-3011-491e-8081-0ee1a0053610" (UID: "02cb0b69-3011-491e-8081-0ee1a0053610"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.231952 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "02cb0b69-3011-491e-8081-0ee1a0053610" (UID: "02cb0b69-3011-491e-8081-0ee1a0053610"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.287479 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwcgf\" (UniqueName: \"kubernetes.io/projected/02cb0b69-3011-491e-8081-0ee1a0053610-kube-api-access-zwcgf\") on node \"crc\" DevicePath \"\"" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.287524 4828 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.287538 4828 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.287551 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.287564 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.287577 4828 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02cb0b69-3011-491e-8081-0ee1a0053610-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.717972 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" event={"ID":"02cb0b69-3011-491e-8081-0ee1a0053610","Type":"ContainerDied","Data":"1a05e57714d02027c017bde692ebfc55131d85b494fc6594f26dbac37e02be4b"} Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.718019 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a05e57714d02027c017bde692ebfc55131d85b494fc6594f26dbac37e02be4b" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.718027 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.808948 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"] Dec 05 19:42:33 crc kubenswrapper[4828]: E1205 19:42:33.809328 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02cb0b69-3011-491e-8081-0ee1a0053610" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.809345 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="02cb0b69-3011-491e-8081-0ee1a0053610" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.809549 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="02cb0b69-3011-491e-8081-0ee1a0053610" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.810224 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.813320 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj"
Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.813323 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.813328 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.813859 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.814158 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.818536 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"]
Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.898245 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.898464 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.898632 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.898671 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zc2s\" (UniqueName: \"kubernetes.io/projected/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-kube-api-access-6zc2s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:33 crc kubenswrapper[4828]: I1205 19:42:33.898939 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.000315 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.000369 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zc2s\" (UniqueName: \"kubernetes.io/projected/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-kube-api-access-6zc2s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.000453 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.000511 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.000548 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.004917 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.005420 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.007269 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.010108 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.025685 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zc2s\" (UniqueName: \"kubernetes.io/projected/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-kube-api-access-6zc2s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.128864 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.717966 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5"]
Dec 05 19:42:34 crc kubenswrapper[4828]: I1205 19:42:34.728533 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5" event={"ID":"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e","Type":"ContainerStarted","Data":"34303be8646d1c338055ffb2dce329e18595f1d5c1a1c8a2f2cbd706a8097e6f"}
Dec 05 19:42:35 crc kubenswrapper[4828]: I1205 19:42:35.744771 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5" event={"ID":"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e","Type":"ContainerStarted","Data":"1c739cdac1dff38f17ebe0b786e40b73c570e9ab66bc3dccfcdc15f52b50e48a"}
Dec 05 19:42:35 crc kubenswrapper[4828]: I1205 19:42:35.780652 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5" podStartSLOduration=2.264865587 podStartE2EDuration="2.780632794s" podCreationTimestamp="2025-12-05 19:42:33 +0000 UTC" firstStartedPulling="2025-12-05 19:42:34.721279583 +0000 UTC m=+2332.616501889" lastFinishedPulling="2025-12-05 19:42:35.2370468 +0000 UTC m=+2333.132269096" observedRunningTime="2025-12-05 19:42:35.769954114 +0000 UTC m=+2333.665176440" watchObservedRunningTime="2025-12-05 19:42:35.780632794 +0000 UTC m=+2333.675855100"
Dec 05 19:42:37 crc kubenswrapper[4828]: I1205 19:42:37.446814 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:42:37 crc kubenswrapper[4828]: E1205 19:42:37.447377 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:42:52 crc kubenswrapper[4828]: I1205 19:42:52.453170 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:42:52 crc kubenswrapper[4828]: E1205 19:42:52.454108 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:43:03 crc kubenswrapper[4828]: I1205 19:43:03.447360 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:43:03 crc kubenswrapper[4828]: E1205 19:43:03.448427 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:43:17 crc kubenswrapper[4828]: I1205 19:43:17.446707 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:43:17 crc kubenswrapper[4828]: E1205 19:43:17.447583 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:43:31 crc kubenswrapper[4828]: I1205 19:43:31.447063 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:43:31 crc kubenswrapper[4828]: E1205 19:43:31.447746 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:43:42 crc kubenswrapper[4828]: I1205 19:43:42.461350 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:43:42 crc kubenswrapper[4828]: E1205 19:43:42.468522 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:43:55 crc kubenswrapper[4828]: I1205 19:43:55.447011 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:43:55 crc kubenswrapper[4828]: E1205 19:43:55.447866 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:44:06 crc kubenswrapper[4828]: I1205 19:44:06.446695 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:44:06 crc kubenswrapper[4828]: E1205 19:44:06.447642 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:44:16 crc kubenswrapper[4828]: I1205 19:44:16.717033 4828 generic.go:334] "Generic (PLEG): container finished" podID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" exitCode=1
Dec 05 19:44:16 crc kubenswrapper[4828]: I1205 19:44:16.717167 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerDied","Data":"a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154"}
Dec 05 19:44:16 crc kubenswrapper[4828]: I1205 19:44:16.717785 4828 scope.go:117] "RemoveContainer" containerID="38e4dac2d6a881889bb348b79d1e1f1c0a83324dc2bf7d6fb1d1128a0cd7ea6d"
Dec 05 19:44:16 crc kubenswrapper[4828]: I1205 19:44:16.718469 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154"
Dec 05 19:44:16 crc kubenswrapper[4828]: E1205 19:44:16.718774 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:44:18 crc kubenswrapper[4828]: I1205 19:44:18.447112 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:44:18 crc kubenswrapper[4828]: E1205 19:44:18.448061 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:44:25 crc kubenswrapper[4828]: I1205 19:44:25.117489 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:44:25 crc kubenswrapper[4828]: I1205 19:44:25.118049 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:44:25 crc kubenswrapper[4828]: I1205 19:44:25.118782 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154"
Dec 05 19:44:25 crc kubenswrapper[4828]: E1205 19:44:25.119079 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:44:32 crc kubenswrapper[4828]: I1205 19:44:32.453770 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:44:32 crc kubenswrapper[4828]: E1205 19:44:32.454461 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:44:35 crc kubenswrapper[4828]: I1205 19:44:35.447210 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154"
Dec 05 19:44:35 crc kubenswrapper[4828]: E1205 19:44:35.449729 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:44:43 crc kubenswrapper[4828]: I1205 19:44:43.446301 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:44:43 crc kubenswrapper[4828]: E1205 19:44:43.447906 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:44:47 crc kubenswrapper[4828]: I1205 19:44:47.447244 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154"
Dec 05 19:44:47 crc kubenswrapper[4828]: E1205 19:44:47.448032 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:44:54 crc kubenswrapper[4828]: I1205 19:44:54.447102 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:44:54 crc kubenswrapper[4828]: E1205 19:44:54.447696 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:44:58 crc kubenswrapper[4828]: I1205 19:44:58.447271 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154"
Dec 05 19:44:58 crc kubenswrapper[4828]: E1205 19:44:58.448089 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.159452 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"]
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.160877 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.163329 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.163532 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.185626 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"]
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.274753 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/139d311c-2633-44ca-b89c-36079ecb4e85-secret-volume\") pod \"collect-profiles-29416065-xll7p\" (UID: \"139d311c-2633-44ca-b89c-36079ecb4e85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.275333 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz7xd\" (UniqueName: \"kubernetes.io/projected/139d311c-2633-44ca-b89c-36079ecb4e85-kube-api-access-xz7xd\") pod \"collect-profiles-29416065-xll7p\" (UID: \"139d311c-2633-44ca-b89c-36079ecb4e85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.275462 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/139d311c-2633-44ca-b89c-36079ecb4e85-config-volume\") pod \"collect-profiles-29416065-xll7p\" (UID: \"139d311c-2633-44ca-b89c-36079ecb4e85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.377632 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/139d311c-2633-44ca-b89c-36079ecb4e85-config-volume\") pod \"collect-profiles-29416065-xll7p\" (UID: \"139d311c-2633-44ca-b89c-36079ecb4e85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.377734 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/139d311c-2633-44ca-b89c-36079ecb4e85-secret-volume\") pod \"collect-profiles-29416065-xll7p\" (UID: \"139d311c-2633-44ca-b89c-36079ecb4e85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.377867 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz7xd\" (UniqueName: \"kubernetes.io/projected/139d311c-2633-44ca-b89c-36079ecb4e85-kube-api-access-xz7xd\") pod \"collect-profiles-29416065-xll7p\" (UID: \"139d311c-2633-44ca-b89c-36079ecb4e85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.379176 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/139d311c-2633-44ca-b89c-36079ecb4e85-config-volume\") pod \"collect-profiles-29416065-xll7p\" (UID: \"139d311c-2633-44ca-b89c-36079ecb4e85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.386756 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/139d311c-2633-44ca-b89c-36079ecb4e85-secret-volume\") pod \"collect-profiles-29416065-xll7p\" (UID: \"139d311c-2633-44ca-b89c-36079ecb4e85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.393609 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz7xd\" (UniqueName: \"kubernetes.io/projected/139d311c-2633-44ca-b89c-36079ecb4e85-kube-api-access-xz7xd\") pod \"collect-profiles-29416065-xll7p\" (UID: \"139d311c-2633-44ca-b89c-36079ecb4e85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.503668 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:00 crc kubenswrapper[4828]: I1205 19:45:00.953586 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"]
Dec 05 19:45:01 crc kubenswrapper[4828]: I1205 19:45:01.163146 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p" event={"ID":"139d311c-2633-44ca-b89c-36079ecb4e85","Type":"ContainerStarted","Data":"c14336c5d49383bc927ca7f21446107cc72bafb2849cd094db06d75bdc120a26"}
Dec 05 19:45:01 crc kubenswrapper[4828]: I1205 19:45:01.163207 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p" event={"ID":"139d311c-2633-44ca-b89c-36079ecb4e85","Type":"ContainerStarted","Data":"1bc6d2a74dd655d34fc29a013a9a6b35351aeb41c8b2e986591cc6e8f5410bdb"}
Dec 05 19:45:01 crc kubenswrapper[4828]: I1205 19:45:01.187376 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p" podStartSLOduration=1.187351423 podStartE2EDuration="1.187351423s" podCreationTimestamp="2025-12-05 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 19:45:01.177833316 +0000 UTC m=+2479.073055622" watchObservedRunningTime="2025-12-05 19:45:01.187351423 +0000 UTC m=+2479.082573739"
Dec 05 19:45:02 crc kubenswrapper[4828]: I1205 19:45:02.173234 4828 generic.go:334] "Generic (PLEG): container finished" podID="139d311c-2633-44ca-b89c-36079ecb4e85" containerID="c14336c5d49383bc927ca7f21446107cc72bafb2849cd094db06d75bdc120a26" exitCode=0
Dec 05 19:45:02 crc kubenswrapper[4828]: I1205 19:45:02.173325 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p" event={"ID":"139d311c-2633-44ca-b89c-36079ecb4e85","Type":"ContainerDied","Data":"c14336c5d49383bc927ca7f21446107cc72bafb2849cd094db06d75bdc120a26"}
Dec 05 19:45:03 crc kubenswrapper[4828]: I1205 19:45:03.559258 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:03 crc kubenswrapper[4828]: I1205 19:45:03.651444 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz7xd\" (UniqueName: \"kubernetes.io/projected/139d311c-2633-44ca-b89c-36079ecb4e85-kube-api-access-xz7xd\") pod \"139d311c-2633-44ca-b89c-36079ecb4e85\" (UID: \"139d311c-2633-44ca-b89c-36079ecb4e85\") "
Dec 05 19:45:03 crc kubenswrapper[4828]: I1205 19:45:03.651866 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/139d311c-2633-44ca-b89c-36079ecb4e85-secret-volume\") pod \"139d311c-2633-44ca-b89c-36079ecb4e85\" (UID: \"139d311c-2633-44ca-b89c-36079ecb4e85\") "
Dec 05 19:45:03 crc kubenswrapper[4828]: I1205 19:45:03.651917 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/139d311c-2633-44ca-b89c-36079ecb4e85-config-volume\") pod \"139d311c-2633-44ca-b89c-36079ecb4e85\" (UID: \"139d311c-2633-44ca-b89c-36079ecb4e85\") "
Dec 05 19:45:03 crc kubenswrapper[4828]: I1205 19:45:03.653005 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139d311c-2633-44ca-b89c-36079ecb4e85-config-volume" (OuterVolumeSpecName: "config-volume") pod "139d311c-2633-44ca-b89c-36079ecb4e85" (UID: "139d311c-2633-44ca-b89c-36079ecb4e85"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:45:03 crc kubenswrapper[4828]: I1205 19:45:03.658372 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139d311c-2633-44ca-b89c-36079ecb4e85-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "139d311c-2633-44ca-b89c-36079ecb4e85" (UID: "139d311c-2633-44ca-b89c-36079ecb4e85"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:45:03 crc kubenswrapper[4828]: I1205 19:45:03.676240 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139d311c-2633-44ca-b89c-36079ecb4e85-kube-api-access-xz7xd" (OuterVolumeSpecName: "kube-api-access-xz7xd") pod "139d311c-2633-44ca-b89c-36079ecb4e85" (UID: "139d311c-2633-44ca-b89c-36079ecb4e85"). InnerVolumeSpecName "kube-api-access-xz7xd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:45:03 crc kubenswrapper[4828]: I1205 19:45:03.754012 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz7xd\" (UniqueName: \"kubernetes.io/projected/139d311c-2633-44ca-b89c-36079ecb4e85-kube-api-access-xz7xd\") on node \"crc\" DevicePath \"\""
Dec 05 19:45:03 crc kubenswrapper[4828]: I1205 19:45:03.754046 4828 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/139d311c-2633-44ca-b89c-36079ecb4e85-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 05 19:45:03 crc kubenswrapper[4828]: I1205 19:45:03.754059 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/139d311c-2633-44ca-b89c-36079ecb4e85-config-volume\") on node \"crc\" DevicePath \"\""
Dec 05 19:45:04 crc kubenswrapper[4828]: I1205 19:45:04.216197 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p" event={"ID":"139d311c-2633-44ca-b89c-36079ecb4e85","Type":"ContainerDied","Data":"1bc6d2a74dd655d34fc29a013a9a6b35351aeb41c8b2e986591cc6e8f5410bdb"}
Dec 05 19:45:04 crc kubenswrapper[4828]: I1205 19:45:04.216239 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc6d2a74dd655d34fc29a013a9a6b35351aeb41c8b2e986591cc6e8f5410bdb"
Dec 05 19:45:04 crc kubenswrapper[4828]: I1205 19:45:04.216315 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416065-xll7p"
Dec 05 19:45:04 crc kubenswrapper[4828]: I1205 19:45:04.262155 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks"]
Dec 05 19:45:04 crc kubenswrapper[4828]: I1205 19:45:04.270794 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416020-s5gks"]
Dec 05 19:45:04 crc kubenswrapper[4828]: I1205 19:45:04.474634 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5ed41b4-64e6-407a-b3a5-104f2b97b008" path="/var/lib/kubelet/pods/b5ed41b4-64e6-407a-b3a5-104f2b97b008/volumes"
Dec 05 19:45:08 crc kubenswrapper[4828]: I1205 19:45:08.446866 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:45:08 crc kubenswrapper[4828]: E1205 19:45:08.448501 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:45:10 crc kubenswrapper[4828]: I1205 19:45:10.449256 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154"
Dec 05 19:45:10 crc kubenswrapper[4828]: E1205 19:45:10.450109 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.221413 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6wskh"]
Dec 05 19:45:22 crc kubenswrapper[4828]: E1205 19:45:22.224779 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139d311c-2633-44ca-b89c-36079ecb4e85" containerName="collect-profiles"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.224902 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="139d311c-2633-44ca-b89c-36079ecb4e85" containerName="collect-profiles"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.225145 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="139d311c-2633-44ca-b89c-36079ecb4e85" containerName="collect-profiles"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.226558 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.241034 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wskh"]
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.387201 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lq6b\" (UniqueName: \"kubernetes.io/projected/b159a673-7b19-4aca-8725-2918e15c8629-kube-api-access-7lq6b\") pod \"redhat-marketplace-6wskh\" (UID: \"b159a673-7b19-4aca-8725-2918e15c8629\") " pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.387414 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b159a673-7b19-4aca-8725-2918e15c8629-catalog-content\") pod \"redhat-marketplace-6wskh\" (UID: \"b159a673-7b19-4aca-8725-2918e15c8629\") " pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.387622 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b159a673-7b19-4aca-8725-2918e15c8629-utilities\") pod \"redhat-marketplace-6wskh\" (UID: \"b159a673-7b19-4aca-8725-2918e15c8629\") " pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.452290 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.452420 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:45:22 crc kubenswrapper[4828]: E1205 19:45:22.452538 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:45:22 crc kubenswrapper[4828]: E1205 19:45:22.452655 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.489612 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b159a673-7b19-4aca-8725-2918e15c8629-utilities\") pod \"redhat-marketplace-6wskh\" (UID: \"b159a673-7b19-4aca-8725-2918e15c8629\") " pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.489708 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lq6b\" (UniqueName: \"kubernetes.io/projected/b159a673-7b19-4aca-8725-2918e15c8629-kube-api-access-7lq6b\") pod \"redhat-marketplace-6wskh\" (UID: \"b159a673-7b19-4aca-8725-2918e15c8629\") " pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.489768 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b159a673-7b19-4aca-8725-2918e15c8629-catalog-content\") pod \"redhat-marketplace-6wskh\" (UID: \"b159a673-7b19-4aca-8725-2918e15c8629\") " pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.490218 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b159a673-7b19-4aca-8725-2918e15c8629-catalog-content\") pod \"redhat-marketplace-6wskh\" (UID: \"b159a673-7b19-4aca-8725-2918e15c8629\") " pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.490265 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b159a673-7b19-4aca-8725-2918e15c8629-utilities\") pod \"redhat-marketplace-6wskh\" (UID: \"b159a673-7b19-4aca-8725-2918e15c8629\") " pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.512062 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lq6b\" (UniqueName: \"kubernetes.io/projected/b159a673-7b19-4aca-8725-2918e15c8629-kube-api-access-7lq6b\") pod \"redhat-marketplace-6wskh\" (UID: \"b159a673-7b19-4aca-8725-2918e15c8629\") " pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:22 crc kubenswrapper[4828]: I1205 19:45:22.548573 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.037788 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wskh"]
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.626753 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dcn2z"]
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.629032 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.643799 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcn2z"]
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.651612 4828 generic.go:334] "Generic (PLEG): container finished" podID="b159a673-7b19-4aca-8725-2918e15c8629" containerID="bde4a711ba137c529743ba80f510d95bf15b65672ebca060d46991d18cd06964" exitCode=0
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.651665 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wskh" event={"ID":"b159a673-7b19-4aca-8725-2918e15c8629","Type":"ContainerDied","Data":"bde4a711ba137c529743ba80f510d95bf15b65672ebca060d46991d18cd06964"}
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.651699 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wskh" event={"ID":"b159a673-7b19-4aca-8725-2918e15c8629","Type":"ContainerStarted","Data":"715b9f2f2b9b7219dd4909404299c245f9616db1323814ce06ddd656a9bbf270"}
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.826346 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75e760b7-c8b1-41ad-bc57-2b023d569db1-utilities\") pod \"redhat-operators-dcn2z\" (UID: \"75e760b7-c8b1-41ad-bc57-2b023d569db1\") " pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.826422 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75e760b7-c8b1-41ad-bc57-2b023d569db1-catalog-content\") pod \"redhat-operators-dcn2z\" (UID: \"75e760b7-c8b1-41ad-bc57-2b023d569db1\") " pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.826516 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zktsz\" (UniqueName: \"kubernetes.io/projected/75e760b7-c8b1-41ad-bc57-2b023d569db1-kube-api-access-zktsz\") pod \"redhat-operators-dcn2z\" (UID: \"75e760b7-c8b1-41ad-bc57-2b023d569db1\") " pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.928545 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zktsz\" (UniqueName: \"kubernetes.io/projected/75e760b7-c8b1-41ad-bc57-2b023d569db1-kube-api-access-zktsz\") pod \"redhat-operators-dcn2z\" (UID: \"75e760b7-c8b1-41ad-bc57-2b023d569db1\") " pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.928631 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75e760b7-c8b1-41ad-bc57-2b023d569db1-utilities\") pod \"redhat-operators-dcn2z\" (UID: \"75e760b7-c8b1-41ad-bc57-2b023d569db1\") " pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.928690 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75e760b7-c8b1-41ad-bc57-2b023d569db1-catalog-content\") pod \"redhat-operators-dcn2z\" (UID: \"75e760b7-c8b1-41ad-bc57-2b023d569db1\") " pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.929146 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75e760b7-c8b1-41ad-bc57-2b023d569db1-utilities\") pod \"redhat-operators-dcn2z\" (UID: \"75e760b7-c8b1-41ad-bc57-2b023d569db1\") " pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.929258 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75e760b7-c8b1-41ad-bc57-2b023d569db1-catalog-content\") pod \"redhat-operators-dcn2z\" (UID: \"75e760b7-c8b1-41ad-bc57-2b023d569db1\") " pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.950129 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zktsz\" (UniqueName: \"kubernetes.io/projected/75e760b7-c8b1-41ad-bc57-2b023d569db1-kube-api-access-zktsz\") pod \"redhat-operators-dcn2z\" (UID: \"75e760b7-c8b1-41ad-bc57-2b023d569db1\") " pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:23 crc kubenswrapper[4828]: I1205 19:45:23.971008 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:24 crc kubenswrapper[4828]: I1205 19:45:24.446113 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcn2z"]
Dec 05 19:45:24 crc kubenswrapper[4828]: W1205 19:45:24.456205 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75e760b7_c8b1_41ad_bc57_2b023d569db1.slice/crio-95da673f23e4c14df5fb1a34b9a2ba62c619a6d3a7f8060120a1e869278ec64a WatchSource:0}: Error finding container 95da673f23e4c14df5fb1a34b9a2ba62c619a6d3a7f8060120a1e869278ec64a: Status 404 returned error can't find the container with id 95da673f23e4c14df5fb1a34b9a2ba62c619a6d3a7f8060120a1e869278ec64a
Dec 05 19:45:24 crc kubenswrapper[4828]: I1205 19:45:24.663543 4828 generic.go:334] "Generic (PLEG): container finished" podID="b159a673-7b19-4aca-8725-2918e15c8629" containerID="ce358b3af208203f0cdb1ccebdad9b3c1e9fd8fc5d0fd047fad397a1e540e4e3" exitCode=0
Dec 05 19:45:24 crc kubenswrapper[4828]: I1205 19:45:24.663624 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wskh" event={"ID":"b159a673-7b19-4aca-8725-2918e15c8629","Type":"ContainerDied","Data":"ce358b3af208203f0cdb1ccebdad9b3c1e9fd8fc5d0fd047fad397a1e540e4e3"}
Dec 05 19:45:24 crc kubenswrapper[4828]: I1205 19:45:24.665227 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcn2z" event={"ID":"75e760b7-c8b1-41ad-bc57-2b023d569db1","Type":"ContainerStarted","Data":"95da673f23e4c14df5fb1a34b9a2ba62c619a6d3a7f8060120a1e869278ec64a"}
Dec 05 19:45:25 crc kubenswrapper[4828]: I1205 19:45:25.683370 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wskh" event={"ID":"b159a673-7b19-4aca-8725-2918e15c8629","Type":"ContainerStarted","Data":"e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e"}
Dec 05 19:45:25 crc kubenswrapper[4828]: I1205 19:45:25.688057 4828 generic.go:334] "Generic (PLEG): container finished" podID="75e760b7-c8b1-41ad-bc57-2b023d569db1" containerID="4c303da4a2ceea26530058e4bcefd72a21a0f2fe7c0142cc8367a44632e69c6b" exitCode=0
Dec 05 19:45:25 crc kubenswrapper[4828]: I1205 19:45:25.688119 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcn2z" event={"ID":"75e760b7-c8b1-41ad-bc57-2b023d569db1","Type":"ContainerDied","Data":"4c303da4a2ceea26530058e4bcefd72a21a0f2fe7c0142cc8367a44632e69c6b"}
Dec 05 19:45:25 crc kubenswrapper[4828]: I1205 19:45:25.718563 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6wskh" podStartSLOduration=2.284799887 podStartE2EDuration="3.718540888s" podCreationTimestamp="2025-12-05 19:45:22 +0000 UTC" firstStartedPulling="2025-12-05 19:45:23.653419276 +0000 UTC m=+2501.548641582" lastFinishedPulling="2025-12-05 19:45:25.087160277 +0000 UTC m=+2502.982382583" observedRunningTime="2025-12-05 19:45:25.706080431 +0000 UTC m=+2503.601302757" watchObservedRunningTime="2025-12-05 19:45:25.718540888 +0000 UTC m=+2503.613763204"
Dec 05 19:45:26 crc kubenswrapper[4828]: I1205 19:45:26.698145 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcn2z" event={"ID":"75e760b7-c8b1-41ad-bc57-2b023d569db1","Type":"ContainerStarted","Data":"18277d9e729cf74f796122c0c4fcfd325265f5fe436bfdeab141b22a2f3390fe"}
Dec 05 19:45:27 crc kubenswrapper[4828]: I1205 19:45:27.709035 4828 generic.go:334] "Generic (PLEG): container finished" podID="75e760b7-c8b1-41ad-bc57-2b023d569db1" containerID="18277d9e729cf74f796122c0c4fcfd325265f5fe436bfdeab141b22a2f3390fe" exitCode=0
Dec 05 19:45:27 crc kubenswrapper[4828]: I1205 19:45:27.709148 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcn2z" event={"ID":"75e760b7-c8b1-41ad-bc57-2b023d569db1","Type":"ContainerDied","Data":"18277d9e729cf74f796122c0c4fcfd325265f5fe436bfdeab141b22a2f3390fe"}
Dec 05 19:45:28 crc kubenswrapper[4828]: I1205 19:45:28.722305 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcn2z" event={"ID":"75e760b7-c8b1-41ad-bc57-2b023d569db1","Type":"ContainerStarted","Data":"f944dce21c5c6a0560aa41a5f128277fc7feec26b978753666a3276da59ed392"}
Dec 05 19:45:28 crc kubenswrapper[4828]: I1205 19:45:28.744543 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dcn2z" podStartSLOduration=3.119404511 podStartE2EDuration="5.744524215s" podCreationTimestamp="2025-12-05 19:45:23 +0000 UTC" firstStartedPulling="2025-12-05 19:45:25.691109437 +0000 UTC m=+2503.586331783" lastFinishedPulling="2025-12-05 19:45:28.316229181 +0000 UTC m=+2506.211451487" observedRunningTime="2025-12-05 19:45:28.740431144 +0000 UTC m=+2506.635653450" watchObservedRunningTime="2025-12-05 19:45:28.744524215 +0000 UTC m=+2506.639746521"
Dec 05 19:45:32 crc kubenswrapper[4828]: I1205 19:45:32.549467 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:32 crc kubenswrapper[4828]: I1205 19:45:32.549526 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:32 crc kubenswrapper[4828]: I1205 19:45:32.613463 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:32 crc kubenswrapper[4828]: I1205 19:45:32.802376 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:33 crc kubenswrapper[4828]: I1205 19:45:33.209947 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wskh"]
Dec 05 19:45:33 crc kubenswrapper[4828]: I1205 19:45:33.816943 4828 scope.go:117] "RemoveContainer" containerID="9802d3655f2ed9c9a92b4650dee78a473b5b30178c1546757deaa1bf9b8f1f6b"
Dec 05 19:45:33 crc kubenswrapper[4828]: I1205 19:45:33.972251 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:33 crc kubenswrapper[4828]: I1205 19:45:33.972307 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:34 crc kubenswrapper[4828]: I1205 19:45:34.023998 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:34 crc kubenswrapper[4828]: I1205 19:45:34.771374 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6wskh" podUID="b159a673-7b19-4aca-8725-2918e15c8629" containerName="registry-server" containerID="cri-o://e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e" gracePeriod=2
Dec 05 19:45:34 crc kubenswrapper[4828]: I1205 19:45:34.816194 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.307321 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.374451 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lq6b\" (UniqueName: \"kubernetes.io/projected/b159a673-7b19-4aca-8725-2918e15c8629-kube-api-access-7lq6b\") pod \"b159a673-7b19-4aca-8725-2918e15c8629\" (UID: \"b159a673-7b19-4aca-8725-2918e15c8629\") "
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.374560 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b159a673-7b19-4aca-8725-2918e15c8629-catalog-content\") pod \"b159a673-7b19-4aca-8725-2918e15c8629\" (UID: \"b159a673-7b19-4aca-8725-2918e15c8629\") "
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.374802 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b159a673-7b19-4aca-8725-2918e15c8629-utilities\") pod \"b159a673-7b19-4aca-8725-2918e15c8629\" (UID: \"b159a673-7b19-4aca-8725-2918e15c8629\") "
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.376251 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b159a673-7b19-4aca-8725-2918e15c8629-utilities" (OuterVolumeSpecName: "utilities") pod "b159a673-7b19-4aca-8725-2918e15c8629" (UID: "b159a673-7b19-4aca-8725-2918e15c8629"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.380186 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b159a673-7b19-4aca-8725-2918e15c8629-kube-api-access-7lq6b" (OuterVolumeSpecName: "kube-api-access-7lq6b") pod "b159a673-7b19-4aca-8725-2918e15c8629" (UID: "b159a673-7b19-4aca-8725-2918e15c8629"). InnerVolumeSpecName "kube-api-access-7lq6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.403243 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b159a673-7b19-4aca-8725-2918e15c8629-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b159a673-7b19-4aca-8725-2918e15c8629" (UID: "b159a673-7b19-4aca-8725-2918e15c8629"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.477096 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b159a673-7b19-4aca-8725-2918e15c8629-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.477129 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b159a673-7b19-4aca-8725-2918e15c8629-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.477140 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lq6b\" (UniqueName: \"kubernetes.io/projected/b159a673-7b19-4aca-8725-2918e15c8629-kube-api-access-7lq6b\") on node \"crc\" DevicePath \"\""
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.783307 4828 generic.go:334] "Generic (PLEG): container finished" podID="b159a673-7b19-4aca-8725-2918e15c8629" containerID="e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e" exitCode=0
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.783377 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wskh"
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.783367 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wskh" event={"ID":"b159a673-7b19-4aca-8725-2918e15c8629","Type":"ContainerDied","Data":"e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e"}
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.783491 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wskh" event={"ID":"b159a673-7b19-4aca-8725-2918e15c8629","Type":"ContainerDied","Data":"715b9f2f2b9b7219dd4909404299c245f9616db1323814ce06ddd656a9bbf270"}
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.783508 4828 scope.go:117] "RemoveContainer" containerID="e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e"
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.806577 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcn2z"]
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.826206 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wskh"]
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.826887 4828 scope.go:117] "RemoveContainer" containerID="ce358b3af208203f0cdb1ccebdad9b3c1e9fd8fc5d0fd047fad397a1e540e4e3"
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.837732 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wskh"]
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.848067 4828 scope.go:117] "RemoveContainer" containerID="bde4a711ba137c529743ba80f510d95bf15b65672ebca060d46991d18cd06964"
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.909429 4828 scope.go:117] "RemoveContainer" containerID="e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e"
Dec 05 19:45:35 crc kubenswrapper[4828]: E1205 19:45:35.911440 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e\": container with ID starting with e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e not found: ID does not exist" containerID="e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e"
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.911487 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e"} err="failed to get container status \"e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e\": rpc error: code = NotFound desc = could not find container \"e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e\": container with ID starting with e765dae14e4450cc4d4096ca11a89a307ab2ff6e20fcdc49495c1ac79377da4e not found: ID does not exist"
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.911515 4828 scope.go:117] "RemoveContainer" containerID="ce358b3af208203f0cdb1ccebdad9b3c1e9fd8fc5d0fd047fad397a1e540e4e3"
Dec 05 19:45:35 crc kubenswrapper[4828]: E1205 19:45:35.911979 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce358b3af208203f0cdb1ccebdad9b3c1e9fd8fc5d0fd047fad397a1e540e4e3\": container with ID starting with ce358b3af208203f0cdb1ccebdad9b3c1e9fd8fc5d0fd047fad397a1e540e4e3 not found: ID does not exist" containerID="ce358b3af208203f0cdb1ccebdad9b3c1e9fd8fc5d0fd047fad397a1e540e4e3"
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.912111 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce358b3af208203f0cdb1ccebdad9b3c1e9fd8fc5d0fd047fad397a1e540e4e3"} err="failed to get container status \"ce358b3af208203f0cdb1ccebdad9b3c1e9fd8fc5d0fd047fad397a1e540e4e3\": rpc error: code = NotFound desc = could not find container \"ce358b3af208203f0cdb1ccebdad9b3c1e9fd8fc5d0fd047fad397a1e540e4e3\": container with ID starting with ce358b3af208203f0cdb1ccebdad9b3c1e9fd8fc5d0fd047fad397a1e540e4e3 not found: ID does not exist"
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.912231 4828 scope.go:117] "RemoveContainer" containerID="bde4a711ba137c529743ba80f510d95bf15b65672ebca060d46991d18cd06964"
Dec 05 19:45:35 crc kubenswrapper[4828]: E1205 19:45:35.912682 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde4a711ba137c529743ba80f510d95bf15b65672ebca060d46991d18cd06964\": container with ID starting with bde4a711ba137c529743ba80f510d95bf15b65672ebca060d46991d18cd06964 not found: ID does not exist" containerID="bde4a711ba137c529743ba80f510d95bf15b65672ebca060d46991d18cd06964"
Dec 05 19:45:35 crc kubenswrapper[4828]: I1205 19:45:35.913401 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde4a711ba137c529743ba80f510d95bf15b65672ebca060d46991d18cd06964"} err="failed to get container status \"bde4a711ba137c529743ba80f510d95bf15b65672ebca060d46991d18cd06964\": rpc error: code = NotFound desc = could not find container \"bde4a711ba137c529743ba80f510d95bf15b65672ebca060d46991d18cd06964\": container with ID starting with bde4a711ba137c529743ba80f510d95bf15b65672ebca060d46991d18cd06964 not found: ID does not exist"
Dec 05 19:45:36 crc kubenswrapper[4828]: I1205 19:45:36.446717 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:45:36 crc kubenswrapper[4828]: E1205 19:45:36.447015 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:45:36 crc kubenswrapper[4828]: I1205 19:45:36.460021 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b159a673-7b19-4aca-8725-2918e15c8629" path="/var/lib/kubelet/pods/b159a673-7b19-4aca-8725-2918e15c8629/volumes"
Dec 05 19:45:36 crc kubenswrapper[4828]: I1205 19:45:36.794213 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dcn2z" podUID="75e760b7-c8b1-41ad-bc57-2b023d569db1" containerName="registry-server" containerID="cri-o://f944dce21c5c6a0560aa41a5f128277fc7feec26b978753666a3276da59ed392" gracePeriod=2
Dec 05 19:45:37 crc kubenswrapper[4828]: I1205 19:45:37.446775 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154"
Dec 05 19:45:37 crc kubenswrapper[4828]: E1205 19:45:37.447675 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:45:37 crc kubenswrapper[4828]: I1205 19:45:37.817781 4828 generic.go:334] "Generic (PLEG): container finished" podID="75e760b7-c8b1-41ad-bc57-2b023d569db1" containerID="f944dce21c5c6a0560aa41a5f128277fc7feec26b978753666a3276da59ed392" exitCode=0
Dec 05 19:45:37 crc kubenswrapper[4828]: I1205 19:45:37.817866 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcn2z" event={"ID":"75e760b7-c8b1-41ad-bc57-2b023d569db1","Type":"ContainerDied","Data":"f944dce21c5c6a0560aa41a5f128277fc7feec26b978753666a3276da59ed392"}
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.377149 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.467582 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zktsz\" (UniqueName: \"kubernetes.io/projected/75e760b7-c8b1-41ad-bc57-2b023d569db1-kube-api-access-zktsz\") pod \"75e760b7-c8b1-41ad-bc57-2b023d569db1\" (UID: \"75e760b7-c8b1-41ad-bc57-2b023d569db1\") "
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.467730 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75e760b7-c8b1-41ad-bc57-2b023d569db1-utilities\") pod \"75e760b7-c8b1-41ad-bc57-2b023d569db1\" (UID: \"75e760b7-c8b1-41ad-bc57-2b023d569db1\") "
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.467908 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75e760b7-c8b1-41ad-bc57-2b023d569db1-catalog-content\") pod \"75e760b7-c8b1-41ad-bc57-2b023d569db1\" (UID: \"75e760b7-c8b1-41ad-bc57-2b023d569db1\") "
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.468923 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75e760b7-c8b1-41ad-bc57-2b023d569db1-utilities" (OuterVolumeSpecName: "utilities") pod "75e760b7-c8b1-41ad-bc57-2b023d569db1" (UID: "75e760b7-c8b1-41ad-bc57-2b023d569db1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.473417 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e760b7-c8b1-41ad-bc57-2b023d569db1-kube-api-access-zktsz" (OuterVolumeSpecName: "kube-api-access-zktsz") pod "75e760b7-c8b1-41ad-bc57-2b023d569db1" (UID: "75e760b7-c8b1-41ad-bc57-2b023d569db1"). InnerVolumeSpecName "kube-api-access-zktsz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.570192 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zktsz\" (UniqueName: \"kubernetes.io/projected/75e760b7-c8b1-41ad-bc57-2b023d569db1-kube-api-access-zktsz\") on node \"crc\" DevicePath \"\""
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.570227 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75e760b7-c8b1-41ad-bc57-2b023d569db1-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.576703 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75e760b7-c8b1-41ad-bc57-2b023d569db1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75e760b7-c8b1-41ad-bc57-2b023d569db1" (UID: "75e760b7-c8b1-41ad-bc57-2b023d569db1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.672105 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75e760b7-c8b1-41ad-bc57-2b023d569db1-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.840629 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcn2z" event={"ID":"75e760b7-c8b1-41ad-bc57-2b023d569db1","Type":"ContainerDied","Data":"95da673f23e4c14df5fb1a34b9a2ba62c619a6d3a7f8060120a1e869278ec64a"}
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.840695 4828 scope.go:117] "RemoveContainer" containerID="f944dce21c5c6a0560aa41a5f128277fc7feec26b978753666a3276da59ed392"
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.840790 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcn2z"
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.879809 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcn2z"]
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.880806 4828 scope.go:117] "RemoveContainer" containerID="18277d9e729cf74f796122c0c4fcfd325265f5fe436bfdeab141b22a2f3390fe"
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.890525 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dcn2z"]
Dec 05 19:45:38 crc kubenswrapper[4828]: I1205 19:45:38.915453 4828 scope.go:117] "RemoveContainer" containerID="4c303da4a2ceea26530058e4bcefd72a21a0f2fe7c0142cc8367a44632e69c6b"
Dec 05 19:45:40 crc kubenswrapper[4828]: I1205 19:45:40.459163 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75e760b7-c8b1-41ad-bc57-2b023d569db1" path="/var/lib/kubelet/pods/75e760b7-c8b1-41ad-bc57-2b023d569db1/volumes"
Dec 05 19:45:47 crc kubenswrapper[4828]: I1205 19:45:47.447045 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:45:47 crc kubenswrapper[4828]: E1205 19:45:47.448023 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 19:45:52 crc kubenswrapper[4828]: I1205 19:45:52.453915 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154"
Dec 05 19:45:52 crc kubenswrapper[4828]: E1205 19:45:52.454733 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:45:58 crc kubenswrapper[4828]: I1205 19:45:58.447552 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116"
Dec 05 19:45:58 crc kubenswrapper[4828]: E1205 19:45:58.448329 4828
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:46:07 crc kubenswrapper[4828]: I1205 19:46:07.447193 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:46:07 crc kubenswrapper[4828]: E1205 19:46:07.448298 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:46:11 crc kubenswrapper[4828]: I1205 19:46:11.446814 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116" Dec 05 19:46:12 crc kubenswrapper[4828]: I1205 19:46:12.170345 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"e1e4002a04773f5dc5d6b89fc28afdf2051b7eba9f56198738b63e196e04f834"} Dec 05 19:46:20 crc kubenswrapper[4828]: I1205 19:46:20.446137 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:46:20 crc kubenswrapper[4828]: E1205 19:46:20.446893 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:46:32 crc kubenswrapper[4828]: I1205 19:46:32.453460 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:46:32 crc kubenswrapper[4828]: E1205 19:46:32.454769 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:46:47 crc kubenswrapper[4828]: I1205 19:46:47.446813 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:46:47 crc kubenswrapper[4828]: E1205 19:46:47.448891 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" 
podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:46:57 crc kubenswrapper[4828]: I1205 19:46:57.629938 4828 generic.go:334] "Generic (PLEG): container finished" podID="b46bef7a-7a08-49f8-a4ff-d6fae6ac588e" containerID="1c739cdac1dff38f17ebe0b786e40b73c570e9ab66bc3dccfcdc15f52b50e48a" exitCode=0 Dec 05 19:46:57 crc kubenswrapper[4828]: I1205 19:46:57.630357 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5" event={"ID":"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e","Type":"ContainerDied","Data":"1c739cdac1dff38f17ebe0b786e40b73c570e9ab66bc3dccfcdc15f52b50e48a"} Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.056903 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.225914 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zc2s\" (UniqueName: \"kubernetes.io/projected/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-kube-api-access-6zc2s\") pod \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.226002 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-libvirt-combined-ca-bundle\") pod \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.226080 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-libvirt-secret-0\") pod \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.226126 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-ssh-key\") pod \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.226384 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-inventory\") pod \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\" (UID: \"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e\") " Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.233136 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-kube-api-access-6zc2s" (OuterVolumeSpecName: "kube-api-access-6zc2s") pod "b46bef7a-7a08-49f8-a4ff-d6fae6ac588e" (UID: "b46bef7a-7a08-49f8-a4ff-d6fae6ac588e"). InnerVolumeSpecName "kube-api-access-6zc2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.233343 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b46bef7a-7a08-49f8-a4ff-d6fae6ac588e" (UID: "b46bef7a-7a08-49f8-a4ff-d6fae6ac588e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.255814 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "b46bef7a-7a08-49f8-a4ff-d6fae6ac588e" (UID: "b46bef7a-7a08-49f8-a4ff-d6fae6ac588e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.263335 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b46bef7a-7a08-49f8-a4ff-d6fae6ac588e" (UID: "b46bef7a-7a08-49f8-a4ff-d6fae6ac588e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.265994 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-inventory" (OuterVolumeSpecName: "inventory") pod "b46bef7a-7a08-49f8-a4ff-d6fae6ac588e" (UID: "b46bef7a-7a08-49f8-a4ff-d6fae6ac588e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.331172 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.331362 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zc2s\" (UniqueName: \"kubernetes.io/projected/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-kube-api-access-6zc2s\") on node \"crc\" DevicePath \"\"" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.331463 4828 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.331544 4828 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.331626 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b46bef7a-7a08-49f8-a4ff-d6fae6ac588e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.658605 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5" event={"ID":"b46bef7a-7a08-49f8-a4ff-d6fae6ac588e","Type":"ContainerDied","Data":"34303be8646d1c338055ffb2dce329e18595f1d5c1a1c8a2f2cbd706a8097e6f"} Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.659034 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34303be8646d1c338055ffb2dce329e18595f1d5c1a1c8a2f2cbd706a8097e6f" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.658704 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.816620 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b"] Dec 05 19:46:59 crc kubenswrapper[4828]: E1205 19:46:59.817369 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b159a673-7b19-4aca-8725-2918e15c8629" containerName="extract-content" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.817468 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b159a673-7b19-4aca-8725-2918e15c8629" containerName="extract-content" Dec 05 19:46:59 crc kubenswrapper[4828]: E1205 19:46:59.817568 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b46bef7a-7a08-49f8-a4ff-d6fae6ac588e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.817644 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b46bef7a-7a08-49f8-a4ff-d6fae6ac588e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 05 19:46:59 crc kubenswrapper[4828]: E1205 19:46:59.817731 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e760b7-c8b1-41ad-bc57-2b023d569db1" containerName="extract-content" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.817860 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e760b7-c8b1-41ad-bc57-2b023d569db1" containerName="extract-content" Dec 05 19:46:59 crc kubenswrapper[4828]: E1205 19:46:59.817950 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e760b7-c8b1-41ad-bc57-2b023d569db1" containerName="extract-utilities" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.818020 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e760b7-c8b1-41ad-bc57-2b023d569db1" containerName="extract-utilities" Dec 05 19:46:59 crc kubenswrapper[4828]: E1205 19:46:59.818113 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b159a673-7b19-4aca-8725-2918e15c8629" containerName="extract-utilities" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.818186 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b159a673-7b19-4aca-8725-2918e15c8629" containerName="extract-utilities" Dec 05 19:46:59 crc kubenswrapper[4828]: E1205 19:46:59.818271 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b159a673-7b19-4aca-8725-2918e15c8629" containerName="registry-server" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.818340 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b159a673-7b19-4aca-8725-2918e15c8629" containerName="registry-server" Dec 05 19:46:59 crc kubenswrapper[4828]: E1205 19:46:59.818444 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e760b7-c8b1-41ad-bc57-2b023d569db1" containerName="registry-server" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.818519 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e760b7-c8b1-41ad-bc57-2b023d569db1" containerName="registry-server" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.818949 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e760b7-c8b1-41ad-bc57-2b023d569db1" containerName="registry-server" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.819097 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b159a673-7b19-4aca-8725-2918e15c8629" containerName="registry-server" Dec 05 19:46:59 crc 
kubenswrapper[4828]: I1205 19:46:59.819232 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b46bef7a-7a08-49f8-a4ff-d6fae6ac588e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.820179 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.822785 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.823584 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.823886 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.824586 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.824623 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.824676 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.827457 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.834441 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b"] Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.953320 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.953386 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mbkg\" (UniqueName: \"kubernetes.io/projected/b730436a-244c-4d2f-8e29-ca230cfe4921-kube-api-access-8mbkg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.953404 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.953440 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" 
(UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.953532 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.953556 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.953590 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.953618 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:46:59 crc kubenswrapper[4828]: I1205 19:46:59.953649 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.055675 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.055743 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.055774 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-extra-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.055813 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.055874 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.055957 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.055986 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mbkg\" (UniqueName: \"kubernetes.io/projected/b730436a-244c-4d2f-8e29-ca230cfe4921-kube-api-access-8mbkg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.056007 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.056055 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.057869 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.060759 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: 
\"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.060790 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.061064 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.061173 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.062676 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.068768 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.070621 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.075922 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mbkg\" (UniqueName: \"kubernetes.io/projected/b730436a-244c-4d2f-8e29-ca230cfe4921-kube-api-access-8mbkg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-chk7b\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.155888 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.446487 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:47:00 crc kubenswrapper[4828]: E1205 19:47:00.447093 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.652576 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b"] Dec 05 19:47:00 crc kubenswrapper[4828]: I1205 19:47:00.666996 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 19:47:01 crc kubenswrapper[4828]: I1205 19:47:01.680891 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" event={"ID":"b730436a-244c-4d2f-8e29-ca230cfe4921","Type":"ContainerStarted","Data":"fb041c34e39f5043084792ccef9f956a2038592562183c6d8335dd891a9dcc11"} Dec 05 19:47:01 crc kubenswrapper[4828]: I1205 19:47:01.681624 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" event={"ID":"b730436a-244c-4d2f-8e29-ca230cfe4921","Type":"ContainerStarted","Data":"6deff7595bd21d560f49f1d03a5740633867045237796add05f7f189d3394f0a"} Dec 05 19:47:01 crc kubenswrapper[4828]: I1205 19:47:01.711704 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" podStartSLOduration=2.282168214 podStartE2EDuration="2.711680749s" podCreationTimestamp="2025-12-05 19:46:59 +0000 UTC" firstStartedPulling="2025-12-05 19:47:00.666412959 +0000 UTC m=+2598.561635255" lastFinishedPulling="2025-12-05 19:47:01.095925464 +0000 UTC m=+2598.991147790" observedRunningTime="2025-12-05 19:47:01.704014922 +0000 UTC m=+2599.599237278" watchObservedRunningTime="2025-12-05 19:47:01.711680749 +0000 UTC m=+2599.606903055" Dec 05 19:47:11 crc kubenswrapper[4828]: I1205 19:47:11.447210 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:47:11 crc kubenswrapper[4828]: E1205 19:47:11.447988 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:47:22 crc kubenswrapper[4828]: I1205 19:47:22.453059 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:47:22 crc kubenswrapper[4828]: E1205 19:47:22.454052 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:47:37 crc kubenswrapper[4828]: I1205 19:47:37.447178 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:47:37 crc kubenswrapper[4828]: E1205 19:47:37.448301 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:47:50 crc kubenswrapper[4828]: I1205 19:47:50.446916 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:47:50 crc kubenswrapper[4828]: E1205 19:47:50.447750 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:48:03 crc kubenswrapper[4828]: I1205 19:48:03.447115 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:48:03 crc kubenswrapper[4828]: E1205 19:48:03.447863 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:48:15 crc kubenswrapper[4828]: I1205 19:48:15.447032 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:48:15 crc kubenswrapper[4828]: E1205 19:48:15.447910 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:48:27 crc kubenswrapper[4828]: I1205 19:48:27.446870 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:48:27 crc kubenswrapper[4828]: E1205 19:48:27.448491 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 
05 19:48:35 crc kubenswrapper[4828]: I1205 19:48:35.259776 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:48:35 crc kubenswrapper[4828]: I1205 19:48:35.260331 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:48:39 crc kubenswrapper[4828]: I1205 19:48:39.446645 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:48:39 crc kubenswrapper[4828]: E1205 19:48:39.447379 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:48:51 crc kubenswrapper[4828]: I1205 19:48:51.446755 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:48:51 crc kubenswrapper[4828]: E1205 19:48:51.447534 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:49:03 crc kubenswrapper[4828]: I1205 19:49:03.446350 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:49:03 crc kubenswrapper[4828]: E1205 19:49:03.447485 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:49:05 crc kubenswrapper[4828]: I1205 19:49:05.259988 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:49:05 crc kubenswrapper[4828]: I1205 19:49:05.260396 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:49:15 crc kubenswrapper[4828]: I1205 19:49:15.446673 4828 
scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:49:15 crc kubenswrapper[4828]: E1205 19:49:15.447328 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:49:27 crc kubenswrapper[4828]: I1205 19:49:27.446416 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154" Dec 05 19:49:28 crc kubenswrapper[4828]: I1205 19:49:28.130052 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c"} Dec 05 19:49:28 crc kubenswrapper[4828]: I1205 19:49:28.132343 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:49:35 crc kubenswrapper[4828]: I1205 19:49:35.124450 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:49:35 crc kubenswrapper[4828]: I1205 19:49:35.259468 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:49:35 crc kubenswrapper[4828]: I1205 19:49:35.259532 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:49:35 crc kubenswrapper[4828]: I1205 19:49:35.259579 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:49:35 crc kubenswrapper[4828]: I1205 19:49:35.260421 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1e4002a04773f5dc5d6b89fc28afdf2051b7eba9f56198738b63e196e04f834"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 19:49:35 crc kubenswrapper[4828]: I1205 19:49:35.260478 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://e1e4002a04773f5dc5d6b89fc28afdf2051b7eba9f56198738b63e196e04f834" gracePeriod=600 Dec 05 19:49:36 crc kubenswrapper[4828]: I1205 19:49:36.199869 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" 
event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"e1e4002a04773f5dc5d6b89fc28afdf2051b7eba9f56198738b63e196e04f834"} Dec 05 19:49:36 crc kubenswrapper[4828]: I1205 19:49:36.200367 4828 scope.go:117] "RemoveContainer" containerID="678839015b1736f45ddc22cc2d08a80169ce131f2bafbe2a218b9ea78153b116" Dec 05 19:49:36 crc kubenswrapper[4828]: I1205 19:49:36.199871 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="e1e4002a04773f5dc5d6b89fc28afdf2051b7eba9f56198738b63e196e04f834" exitCode=0 Dec 05 19:49:36 crc kubenswrapper[4828]: I1205 19:49:36.200436 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5"} Dec 05 19:49:47 crc kubenswrapper[4828]: I1205 19:49:47.311788 4828 generic.go:334] "Generic (PLEG): container finished" podID="b730436a-244c-4d2f-8e29-ca230cfe4921" containerID="fb041c34e39f5043084792ccef9f956a2038592562183c6d8335dd891a9dcc11" exitCode=0 Dec 05 19:49:47 crc kubenswrapper[4828]: I1205 19:49:47.311876 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" event={"ID":"b730436a-244c-4d2f-8e29-ca230cfe4921","Type":"ContainerDied","Data":"fb041c34e39f5043084792ccef9f956a2038592562183c6d8335dd891a9dcc11"} Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.749987 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.836332 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-inventory\") pod \"b730436a-244c-4d2f-8e29-ca230cfe4921\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.836410 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-migration-ssh-key-0\") pod \"b730436a-244c-4d2f-8e29-ca230cfe4921\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.836432 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-combined-ca-bundle\") pod \"b730436a-244c-4d2f-8e29-ca230cfe4921\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.836505 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-extra-config-0\") pod \"b730436a-244c-4d2f-8e29-ca230cfe4921\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.837255 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-cell1-compute-config-1\") pod \"b730436a-244c-4d2f-8e29-ca230cfe4921\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") " Dec 05 19:49:48 
crc kubenswrapper[4828]: I1205 19:49:48.837348 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mbkg\" (UniqueName: \"kubernetes.io/projected/b730436a-244c-4d2f-8e29-ca230cfe4921-kube-api-access-8mbkg\") pod \"b730436a-244c-4d2f-8e29-ca230cfe4921\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") "
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.837375 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-cell1-compute-config-0\") pod \"b730436a-244c-4d2f-8e29-ca230cfe4921\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") "
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.837419 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-migration-ssh-key-1\") pod \"b730436a-244c-4d2f-8e29-ca230cfe4921\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") "
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.837473 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-ssh-key\") pod \"b730436a-244c-4d2f-8e29-ca230cfe4921\" (UID: \"b730436a-244c-4d2f-8e29-ca230cfe4921\") "
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.844246 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b730436a-244c-4d2f-8e29-ca230cfe4921" (UID: "b730436a-244c-4d2f-8e29-ca230cfe4921"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.854120 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b730436a-244c-4d2f-8e29-ca230cfe4921-kube-api-access-8mbkg" (OuterVolumeSpecName: "kube-api-access-8mbkg") pod "b730436a-244c-4d2f-8e29-ca230cfe4921" (UID: "b730436a-244c-4d2f-8e29-ca230cfe4921"). InnerVolumeSpecName "kube-api-access-8mbkg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.869125 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b730436a-244c-4d2f-8e29-ca230cfe4921" (UID: "b730436a-244c-4d2f-8e29-ca230cfe4921"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.873717 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b730436a-244c-4d2f-8e29-ca230cfe4921" (UID: "b730436a-244c-4d2f-8e29-ca230cfe4921"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.875009 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b730436a-244c-4d2f-8e29-ca230cfe4921" (UID: "b730436a-244c-4d2f-8e29-ca230cfe4921"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.876308 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-inventory" (OuterVolumeSpecName: "inventory") pod "b730436a-244c-4d2f-8e29-ca230cfe4921" (UID: "b730436a-244c-4d2f-8e29-ca230cfe4921"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.878326 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b730436a-244c-4d2f-8e29-ca230cfe4921" (UID: "b730436a-244c-4d2f-8e29-ca230cfe4921"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.879665 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b730436a-244c-4d2f-8e29-ca230cfe4921" (UID: "b730436a-244c-4d2f-8e29-ca230cfe4921"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.884627 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b730436a-244c-4d2f-8e29-ca230cfe4921" (UID: "b730436a-244c-4d2f-8e29-ca230cfe4921"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.942882 4828 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.942926 4828 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.942941 4828 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.942955 4828 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.942969 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mbkg\" (UniqueName: \"kubernetes.io/projected/b730436a-244c-4d2f-8e29-ca230cfe4921-kube-api-access-8mbkg\") on node \"crc\" DevicePath \"\""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.942981 4828 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.942993 4828 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.943005 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 05 19:49:48 crc kubenswrapper[4828]: I1205 19:49:48.943016 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b730436a-244c-4d2f-8e29-ca230cfe4921-inventory\") on node \"crc\" DevicePath \"\""
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.330785 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b" event={"ID":"b730436a-244c-4d2f-8e29-ca230cfe4921","Type":"ContainerDied","Data":"6deff7595bd21d560f49f1d03a5740633867045237796add05f7f189d3394f0a"}
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.330847 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6deff7595bd21d560f49f1d03a5740633867045237796add05f7f189d3394f0a"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.330880 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-chk7b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.520194 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"]
Dec 05 19:49:49 crc kubenswrapper[4828]: E1205 19:49:49.520868 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b730436a-244c-4d2f-8e29-ca230cfe4921" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.520900 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="b730436a-244c-4d2f-8e29-ca230cfe4921" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.521238 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="b730436a-244c-4d2f-8e29-ca230cfe4921" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.522278 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.524525 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.524548 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.524672 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.524709 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.525698 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9rkjj"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.531225 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"]
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.655794 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wszg6\" (UniqueName: \"kubernetes.io/projected/e0dde2a7-439b-4b5a-8e4b-363089a9879a-kube-api-access-wszg6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.656134 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.656162 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.656185 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.656220 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.656390 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.656449 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.757698 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.757745 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.757774 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.757810 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.757834 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.757984 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.758060 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wszg6\" (UniqueName: \"kubernetes.io/projected/e0dde2a7-439b-4b5a-8e4b-363089a9879a-kube-api-access-wszg6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.762357 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.763460 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.765253 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.773672 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.775177 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.776444 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wszg6\" (UniqueName: \"kubernetes.io/projected/e0dde2a7-439b-4b5a-8e4b-363089a9879a-kube-api-access-wszg6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.778613 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:49 crc kubenswrapper[4828]: I1205 19:49:49.839370 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:49:50 crc kubenswrapper[4828]: I1205 19:49:50.378508 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"]
Dec 05 19:49:51 crc kubenswrapper[4828]: I1205 19:49:51.349496 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b" event={"ID":"e0dde2a7-439b-4b5a-8e4b-363089a9879a","Type":"ContainerStarted","Data":"da2741f8c220741823b79cfa4c281e0f67ec84d335fc4877bdcd4a3e069948e0"}
Dec 05 19:49:51 crc kubenswrapper[4828]: I1205 19:49:51.350135 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b" event={"ID":"e0dde2a7-439b-4b5a-8e4b-363089a9879a","Type":"ContainerStarted","Data":"2969ae0eced6fc9ca36dfaceec888652253c4ea8aa7bac13c236780d12f9812b"}
Dec 05 19:49:51 crc kubenswrapper[4828]: I1205 19:49:51.388663 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b" podStartSLOduration=1.9453947440000001 podStartE2EDuration="2.388645409s" podCreationTimestamp="2025-12-05 19:49:49 +0000 UTC" firstStartedPulling="2025-12-05 19:49:50.380937835 +0000 UTC m=+2768.276160141" lastFinishedPulling="2025-12-05 19:49:50.82418849 +0000 UTC m=+2768.719410806" observedRunningTime="2025-12-05 19:49:51.388148396 +0000 UTC m=+2769.283370702" watchObservedRunningTime="2025-12-05 19:49:51.388645409 +0000 UTC m=+2769.283867715"
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.366278 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nk9vq"]
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.368763 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.397216 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-catalog-content\") pod \"certified-operators-nk9vq\" (UID: \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\") " pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.397302 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-utilities\") pod \"certified-operators-nk9vq\" (UID: \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\") " pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.397604 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvs5j\" (UniqueName: \"kubernetes.io/projected/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-kube-api-access-vvs5j\") pod \"certified-operators-nk9vq\" (UID: \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\") " pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.402007 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nk9vq"]
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.499620 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvs5j\" (UniqueName: \"kubernetes.io/projected/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-kube-api-access-vvs5j\") pod \"certified-operators-nk9vq\" (UID: \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\") " pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.499728 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-catalog-content\") pod \"certified-operators-nk9vq\" (UID: \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\") " pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.499761 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-utilities\") pod \"certified-operators-nk9vq\" (UID: \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\") " pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.500278 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-utilities\") pod \"certified-operators-nk9vq\" (UID: \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\") " pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.500317 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-catalog-content\") pod \"certified-operators-nk9vq\" (UID: \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\") " pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.517775 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvs5j\" (UniqueName: \"kubernetes.io/projected/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-kube-api-access-vvs5j\") pod \"certified-operators-nk9vq\" (UID: \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\") " pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:23 crc kubenswrapper[4828]: I1205 19:51:23.712922 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:24 crc kubenswrapper[4828]: I1205 19:51:24.229763 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nk9vq"]
Dec 05 19:51:24 crc kubenswrapper[4828]: I1205 19:51:24.283432 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk9vq" event={"ID":"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3","Type":"ContainerStarted","Data":"b29a5bced503d9c7d39226fc58e7450b15100524e34e2ebd60f85204cde8ad27"}
Dec 05 19:51:25 crc kubenswrapper[4828]: I1205 19:51:25.294695 4828 generic.go:334] "Generic (PLEG): container finished" podID="9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" containerID="bb81aedd5b786a22e090c6035c6de9c7f924142be7b1fd137da50f9febc73a09" exitCode=0
Dec 05 19:51:25 crc kubenswrapper[4828]: I1205 19:51:25.294760 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk9vq" event={"ID":"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3","Type":"ContainerDied","Data":"bb81aedd5b786a22e090c6035c6de9c7f924142be7b1fd137da50f9febc73a09"}
Dec 05 19:51:27 crc kubenswrapper[4828]: I1205 19:51:27.314286 4828 generic.go:334] "Generic (PLEG): container finished" podID="9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" containerID="62e8a810de8f63f821ae5d0d426c2840a1acd6f7f91c4eb255bcb39c6ee4f1d4" exitCode=0
Dec 05 19:51:27 crc kubenswrapper[4828]: I1205 19:51:27.314381 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk9vq" event={"ID":"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3","Type":"ContainerDied","Data":"62e8a810de8f63f821ae5d0d426c2840a1acd6f7f91c4eb255bcb39c6ee4f1d4"}
Dec 05 19:51:28 crc kubenswrapper[4828]: I1205 19:51:28.325106 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk9vq" event={"ID":"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3","Type":"ContainerStarted","Data":"b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf"}
Dec 05 19:51:28 crc kubenswrapper[4828]: I1205 19:51:28.345364 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nk9vq" podStartSLOduration=2.911738753 podStartE2EDuration="5.345333921s" podCreationTimestamp="2025-12-05 19:51:23 +0000 UTC" firstStartedPulling="2025-12-05 19:51:25.296660415 +0000 UTC m=+2863.191882721" lastFinishedPulling="2025-12-05 19:51:27.730255583 +0000 UTC m=+2865.625477889" observedRunningTime="2025-12-05 19:51:28.340224502 +0000 UTC m=+2866.235446838" watchObservedRunningTime="2025-12-05 19:51:28.345333921 +0000 UTC m=+2866.240556227"
Dec 05 19:51:33 crc kubenswrapper[4828]: I1205 19:51:33.713204 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:33 crc kubenswrapper[4828]: I1205 19:51:33.713758 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:33 crc kubenswrapper[4828]: I1205 19:51:33.785375 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:34 crc kubenswrapper[4828]: I1205 19:51:34.437638 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:34 crc kubenswrapper[4828]: I1205 19:51:34.488955 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nk9vq"]
Dec 05 19:51:35 crc kubenswrapper[4828]: I1205 19:51:35.259863 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 19:51:35 crc kubenswrapper[4828]: I1205 19:51:35.259952 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 19:51:36 crc kubenswrapper[4828]: I1205 19:51:36.393757 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nk9vq" podUID="9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" containerName="registry-server" containerID="cri-o://b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf" gracePeriod=2
Dec 05 19:51:36 crc kubenswrapper[4828]: I1205 19:51:36.873408 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:36 crc kubenswrapper[4828]: I1205 19:51:36.966099 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-catalog-content\") pod \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\" (UID: \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\") "
Dec 05 19:51:36 crc kubenswrapper[4828]: I1205 19:51:36.966315 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-utilities\") pod \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\" (UID: \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\") "
Dec 05 19:51:36 crc kubenswrapper[4828]: I1205 19:51:36.966345 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvs5j\" (UniqueName: \"kubernetes.io/projected/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-kube-api-access-vvs5j\") pod \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\" (UID: \"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3\") "
Dec 05 19:51:36 crc kubenswrapper[4828]: I1205 19:51:36.967304 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-utilities" (OuterVolumeSpecName: "utilities") pod "9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" (UID: "9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:51:36 crc kubenswrapper[4828]: I1205 19:51:36.972726 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-kube-api-access-vvs5j" (OuterVolumeSpecName: "kube-api-access-vvs5j") pod "9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" (UID: "9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3"). InnerVolumeSpecName "kube-api-access-vvs5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.020440 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" (UID: "9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.068914 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.068944 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.068954 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvs5j\" (UniqueName: \"kubernetes.io/projected/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3-kube-api-access-vvs5j\") on node \"crc\" DevicePath \"\""
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.404789 4828 generic.go:334] "Generic (PLEG): container finished" podID="9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" containerID="b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf" exitCode=0
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.404861 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk9vq" event={"ID":"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3","Type":"ContainerDied","Data":"b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf"}
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.404922 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk9vq" event={"ID":"9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3","Type":"ContainerDied","Data":"b29a5bced503d9c7d39226fc58e7450b15100524e34e2ebd60f85204cde8ad27"}
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.404944 4828 scope.go:117] "RemoveContainer" containerID="b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf"
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.404882 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nk9vq"
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.432063 4828 scope.go:117] "RemoveContainer" containerID="62e8a810de8f63f821ae5d0d426c2840a1acd6f7f91c4eb255bcb39c6ee4f1d4"
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.456439 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nk9vq"]
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.476699 4828 scope.go:117] "RemoveContainer" containerID="bb81aedd5b786a22e090c6035c6de9c7f924142be7b1fd137da50f9febc73a09"
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.478520 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nk9vq"]
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.504056 4828 scope.go:117] "RemoveContainer" containerID="b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf"
Dec 05 19:51:37 crc kubenswrapper[4828]: E1205 19:51:37.504560 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf\": container with ID starting with b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf not found: ID does not exist" containerID="b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf"
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.504600 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf"} err="failed to get container status \"b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf\": rpc error: code = NotFound desc = could not find container \"b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf\": container with ID starting with b34dbbc6dd173135177fd222b4a46c1bd38c69d3319f33bebf38689bf4149adf not found: ID does not exist"
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.504625 4828 scope.go:117] "RemoveContainer" containerID="62e8a810de8f63f821ae5d0d426c2840a1acd6f7f91c4eb255bcb39c6ee4f1d4"
Dec 05 19:51:37 crc kubenswrapper[4828]: E1205 19:51:37.505332 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e8a810de8f63f821ae5d0d426c2840a1acd6f7f91c4eb255bcb39c6ee4f1d4\": container with ID starting with 62e8a810de8f63f821ae5d0d426c2840a1acd6f7f91c4eb255bcb39c6ee4f1d4 not found: ID does not exist" containerID="62e8a810de8f63f821ae5d0d426c2840a1acd6f7f91c4eb255bcb39c6ee4f1d4"
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.505880 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e8a810de8f63f821ae5d0d426c2840a1acd6f7f91c4eb255bcb39c6ee4f1d4"} err="failed to get container status \"62e8a810de8f63f821ae5d0d426c2840a1acd6f7f91c4eb255bcb39c6ee4f1d4\": rpc error: code = NotFound desc = could not find container \"62e8a810de8f63f821ae5d0d426c2840a1acd6f7f91c4eb255bcb39c6ee4f1d4\": container with ID starting with 62e8a810de8f63f821ae5d0d426c2840a1acd6f7f91c4eb255bcb39c6ee4f1d4 not found: ID does not exist"
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.506102 4828 scope.go:117] "RemoveContainer" containerID="bb81aedd5b786a22e090c6035c6de9c7f924142be7b1fd137da50f9febc73a09"
Dec 05 19:51:37 crc kubenswrapper[4828]: E1205 19:51:37.506507 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb81aedd5b786a22e090c6035c6de9c7f924142be7b1fd137da50f9febc73a09\": container with ID starting with bb81aedd5b786a22e090c6035c6de9c7f924142be7b1fd137da50f9febc73a09 not found: ID does not exist" containerID="bb81aedd5b786a22e090c6035c6de9c7f924142be7b1fd137da50f9febc73a09"
Dec 05 19:51:37 crc kubenswrapper[4828]: I1205 19:51:37.506534 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb81aedd5b786a22e090c6035c6de9c7f924142be7b1fd137da50f9febc73a09"} err="failed to get container status \"bb81aedd5b786a22e090c6035c6de9c7f924142be7b1fd137da50f9febc73a09\": rpc error: code = NotFound desc = could not find container \"bb81aedd5b786a22e090c6035c6de9c7f924142be7b1fd137da50f9febc73a09\": container with ID starting with bb81aedd5b786a22e090c6035c6de9c7f924142be7b1fd137da50f9febc73a09 not found: ID does not exist"
Dec 05 19:51:38 crc kubenswrapper[4828]: I1205 19:51:38.456903 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" path="/var/lib/kubelet/pods/9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3/volumes"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.028280 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x6c2j"]
Dec 05 19:51:41 crc kubenswrapper[4828]: E1205 19:51:41.029117 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" containerName="extract-content"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.029134 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" containerName="extract-content"
Dec 05 19:51:41 crc kubenswrapper[4828]: E1205 19:51:41.029154 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" containerName="extract-utilities"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.029162 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" containerName="extract-utilities"
Dec 05 19:51:41 crc kubenswrapper[4828]: E1205 19:51:41.029189 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" containerName="registry-server"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.029197 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" containerName="registry-server"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.029449 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b12ef51-43c8-4a59-bf4e-7ea6dbad67d3" containerName="registry-server"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.033501 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.042075 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43692314-f749-4fd5-ac51-076c2a51301f-utilities\") pod \"community-operators-x6c2j\" (UID: \"43692314-f749-4fd5-ac51-076c2a51301f\") " pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.042347 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43692314-f749-4fd5-ac51-076c2a51301f-catalog-content\") pod \"community-operators-x6c2j\" (UID: \"43692314-f749-4fd5-ac51-076c2a51301f\") " pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.042523 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzzqq\" (UniqueName: \"kubernetes.io/projected/43692314-f749-4fd5-ac51-076c2a51301f-kube-api-access-lzzqq\") pod \"community-operators-x6c2j\" (UID: \"43692314-f749-4fd5-ac51-076c2a51301f\") " pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.050082 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6c2j"]
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.144348 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43692314-f749-4fd5-ac51-076c2a51301f-catalog-content\") pod \"community-operators-x6c2j\" (UID: \"43692314-f749-4fd5-ac51-076c2a51301f\") " pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.144445 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzzqq\" (UniqueName: \"kubernetes.io/projected/43692314-f749-4fd5-ac51-076c2a51301f-kube-api-access-lzzqq\") pod \"community-operators-x6c2j\" (UID: \"43692314-f749-4fd5-ac51-076c2a51301f\") " pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.144601 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43692314-f749-4fd5-ac51-076c2a51301f-utilities\") pod \"community-operators-x6c2j\" (UID: \"43692314-f749-4fd5-ac51-076c2a51301f\") " pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.144790 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43692314-f749-4fd5-ac51-076c2a51301f-catalog-content\") pod \"community-operators-x6c2j\" (UID: \"43692314-f749-4fd5-ac51-076c2a51301f\") " pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.145012 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43692314-f749-4fd5-ac51-076c2a51301f-utilities\") pod \"community-operators-x6c2j\" (UID: \"43692314-f749-4fd5-ac51-076c2a51301f\") " pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.184735 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzzqq\" (UniqueName: \"kubernetes.io/projected/43692314-f749-4fd5-ac51-076c2a51301f-kube-api-access-lzzqq\") pod \"community-operators-x6c2j\" (UID: \"43692314-f749-4fd5-ac51-076c2a51301f\") " pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.397707 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:41 crc kubenswrapper[4828]: W1205 19:51:41.923568 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43692314_f749_4fd5_ac51_076c2a51301f.slice/crio-ee6e606c65dd0085bf3003e21ab7d4eb0bc275b58dc9cf13d4c2d0e2256c5c19 WatchSource:0}: Error finding container ee6e606c65dd0085bf3003e21ab7d4eb0bc275b58dc9cf13d4c2d0e2256c5c19: Status 404 returned error can't find the container with id ee6e606c65dd0085bf3003e21ab7d4eb0bc275b58dc9cf13d4c2d0e2256c5c19
Dec 05 19:51:41 crc kubenswrapper[4828]: I1205 19:51:41.932766 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6c2j"]
Dec 05 19:51:42 crc kubenswrapper[4828]: I1205 19:51:42.469370 4828 generic.go:334] "Generic (PLEG): container finished" podID="43692314-f749-4fd5-ac51-076c2a51301f" containerID="29e15b6a051d755eca04132a40c2217e2116dbf371f27d294f3771be40e53634" exitCode=0
Dec 05 19:51:42 crc kubenswrapper[4828]: I1205 19:51:42.469715 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6c2j" event={"ID":"43692314-f749-4fd5-ac51-076c2a51301f","Type":"ContainerDied","Data":"29e15b6a051d755eca04132a40c2217e2116dbf371f27d294f3771be40e53634"}
Dec 05 19:51:42 crc kubenswrapper[4828]: I1205 19:51:42.469742 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6c2j" event={"ID":"43692314-f749-4fd5-ac51-076c2a51301f","Type":"ContainerStarted","Data":"ee6e606c65dd0085bf3003e21ab7d4eb0bc275b58dc9cf13d4c2d0e2256c5c19"}
Dec 05 19:51:43 crc kubenswrapper[4828]: I1205 19:51:43.482477 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6c2j" event={"ID":"43692314-f749-4fd5-ac51-076c2a51301f","Type":"ContainerStarted","Data":"886c73f8b79c02c650c0f17abc5340f6452e07b7a22905610fe2aef2b51bdc87"}
Dec 05 19:51:44 crc kubenswrapper[4828]: I1205 19:51:44.493692 4828 generic.go:334] "Generic (PLEG): container finished" podID="43692314-f749-4fd5-ac51-076c2a51301f" containerID="886c73f8b79c02c650c0f17abc5340f6452e07b7a22905610fe2aef2b51bdc87" exitCode=0
Dec 05 19:51:44 crc kubenswrapper[4828]: I1205 19:51:44.493741 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6c2j" event={"ID":"43692314-f749-4fd5-ac51-076c2a51301f","Type":"ContainerDied","Data":"886c73f8b79c02c650c0f17abc5340f6452e07b7a22905610fe2aef2b51bdc87"}
Dec 05 19:51:45 crc kubenswrapper[4828]: I1205 19:51:45.504541 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6c2j" event={"ID":"43692314-f749-4fd5-ac51-076c2a51301f","Type":"ContainerStarted","Data":"b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437"}
Dec 05 19:51:51 crc kubenswrapper[4828]: I1205 19:51:51.398063 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:51 crc kubenswrapper[4828]: I1205 19:51:51.398736 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:51 crc kubenswrapper[4828]: I1205 19:51:51.473351 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:51 crc kubenswrapper[4828]: I1205 19:51:51.493343 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x6c2j" podStartSLOduration=8.07944593 podStartE2EDuration="10.493321968s" podCreationTimestamp="2025-12-05 19:51:41 +0000 UTC" firstStartedPulling="2025-12-05 19:51:42.475522688 +0000 UTC m=+2880.370745004" lastFinishedPulling="2025-12-05 19:51:44.889398736 +0000 UTC m=+2882.784621042" observedRunningTime="2025-12-05 19:51:45.528380588 +0000 UTC m=+2883.423602894" watchObservedRunningTime="2025-12-05 19:51:51.493321968 +0000 UTC m=+2889.388544284"
Dec 05 19:51:51 crc kubenswrapper[4828]: I1205 19:51:51.628085 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:51 crc kubenswrapper[4828]: I1205 19:51:51.707167 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x6c2j"]
Dec 05 19:51:53 crc kubenswrapper[4828]: I1205 19:51:53.590408 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x6c2j" podUID="43692314-f749-4fd5-ac51-076c2a51301f" containerName="registry-server" containerID="cri-o://b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437" gracePeriod=2
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.108544 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.192273 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzzqq\" (UniqueName: \"kubernetes.io/projected/43692314-f749-4fd5-ac51-076c2a51301f-kube-api-access-lzzqq\") pod \"43692314-f749-4fd5-ac51-076c2a51301f\" (UID: \"43692314-f749-4fd5-ac51-076c2a51301f\") "
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.192351 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43692314-f749-4fd5-ac51-076c2a51301f-utilities\") pod \"43692314-f749-4fd5-ac51-076c2a51301f\" (UID: \"43692314-f749-4fd5-ac51-076c2a51301f\") "
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.192589 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43692314-f749-4fd5-ac51-076c2a51301f-catalog-content\") pod \"43692314-f749-4fd5-ac51-076c2a51301f\" (UID: \"43692314-f749-4fd5-ac51-076c2a51301f\") "
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.193446 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43692314-f749-4fd5-ac51-076c2a51301f-utilities" (OuterVolumeSpecName: "utilities") pod "43692314-f749-4fd5-ac51-076c2a51301f" (UID: "43692314-f749-4fd5-ac51-076c2a51301f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.202762 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43692314-f749-4fd5-ac51-076c2a51301f-kube-api-access-lzzqq" (OuterVolumeSpecName: "kube-api-access-lzzqq") pod "43692314-f749-4fd5-ac51-076c2a51301f" (UID: "43692314-f749-4fd5-ac51-076c2a51301f"). InnerVolumeSpecName "kube-api-access-lzzqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.249551 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43692314-f749-4fd5-ac51-076c2a51301f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43692314-f749-4fd5-ac51-076c2a51301f" (UID: "43692314-f749-4fd5-ac51-076c2a51301f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.295420 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43692314-f749-4fd5-ac51-076c2a51301f-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.295466 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43692314-f749-4fd5-ac51-076c2a51301f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.295481 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzzqq\" (UniqueName: \"kubernetes.io/projected/43692314-f749-4fd5-ac51-076c2a51301f-kube-api-access-lzzqq\") on node \"crc\" DevicePath \"\""
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.603819 4828 generic.go:334] "Generic (PLEG): container finished" podID="43692314-f749-4fd5-ac51-076c2a51301f" containerID="b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437" exitCode=0
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.603873 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6c2j" event={"ID":"43692314-f749-4fd5-ac51-076c2a51301f","Type":"ContainerDied","Data":"b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437"}
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.603928 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6c2j" event={"ID":"43692314-f749-4fd5-ac51-076c2a51301f","Type":"ContainerDied","Data":"ee6e606c65dd0085bf3003e21ab7d4eb0bc275b58dc9cf13d4c2d0e2256c5c19"}
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.603950 4828 scope.go:117] "RemoveContainer" containerID="b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437"
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.604671 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6c2j"
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.625653 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x6c2j"]
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.629034 4828 scope.go:117] "RemoveContainer" containerID="886c73f8b79c02c650c0f17abc5340f6452e07b7a22905610fe2aef2b51bdc87"
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.636699 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x6c2j"]
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.648907 4828 scope.go:117] "RemoveContainer" containerID="29e15b6a051d755eca04132a40c2217e2116dbf371f27d294f3771be40e53634"
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.690997 4828 scope.go:117] "RemoveContainer" containerID="b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437"
Dec 05 19:51:54 crc kubenswrapper[4828]: E1205 19:51:54.695097 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437\": container with ID starting with b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437 not found: ID does not exist" containerID="b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437"
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.695152 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437"} err="failed to get container status \"b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437\": rpc error: code = NotFound desc = could not find container \"b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437\": container with ID starting with b74d4e270afdd1e296c53740c8c766cb9aaf2ca5aa5b12e31d4ad8cde8142437 not found: ID does not exist"
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.695179 4828 scope.go:117] "RemoveContainer" containerID="886c73f8b79c02c650c0f17abc5340f6452e07b7a22905610fe2aef2b51bdc87"
Dec 05 19:51:54 crc kubenswrapper[4828]: E1205 19:51:54.695507 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"886c73f8b79c02c650c0f17abc5340f6452e07b7a22905610fe2aef2b51bdc87\": container with ID starting with 886c73f8b79c02c650c0f17abc5340f6452e07b7a22905610fe2aef2b51bdc87 not found: ID does not exist" containerID="886c73f8b79c02c650c0f17abc5340f6452e07b7a22905610fe2aef2b51bdc87"
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.695540 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"886c73f8b79c02c650c0f17abc5340f6452e07b7a22905610fe2aef2b51bdc87"} err="failed to get container status \"886c73f8b79c02c650c0f17abc5340f6452e07b7a22905610fe2aef2b51bdc87\": rpc error: code = NotFound desc = could not find container \"886c73f8b79c02c650c0f17abc5340f6452e07b7a22905610fe2aef2b51bdc87\": container with ID starting with 886c73f8b79c02c650c0f17abc5340f6452e07b7a22905610fe2aef2b51bdc87 not found: ID does not exist"
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.695553 4828 scope.go:117] "RemoveContainer" containerID="29e15b6a051d755eca04132a40c2217e2116dbf371f27d294f3771be40e53634"
Dec 05 19:51:54 crc kubenswrapper[4828]: E1205 19:51:54.695741 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29e15b6a051d755eca04132a40c2217e2116dbf371f27d294f3771be40e53634\": container with ID starting with 29e15b6a051d755eca04132a40c2217e2116dbf371f27d294f3771be40e53634 not found: ID does not exist" containerID="29e15b6a051d755eca04132a40c2217e2116dbf371f27d294f3771be40e53634"
Dec 05 19:51:54 crc kubenswrapper[4828]: I1205 19:51:54.695759 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e15b6a051d755eca04132a40c2217e2116dbf371f27d294f3771be40e53634"} err="failed to get container status \"29e15b6a051d755eca04132a40c2217e2116dbf371f27d294f3771be40e53634\": rpc error: code = NotFound desc = could not find container \"29e15b6a051d755eca04132a40c2217e2116dbf371f27d294f3771be40e53634\": container with ID starting with 29e15b6a051d755eca04132a40c2217e2116dbf371f27d294f3771be40e53634 not found: ID does not exist"
Dec 05 19:51:56 crc kubenswrapper[4828]: I1205 19:51:56.458101 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43692314-f749-4fd5-ac51-076c2a51301f" path="/var/lib/kubelet/pods/43692314-f749-4fd5-ac51-076c2a51301f/volumes"
Dec 05 19:52:01 crc kubenswrapper[4828]: I1205 19:52:01.688262 4828 generic.go:334] "Generic (PLEG): container finished" podID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" exitCode=1
Dec 05 19:52:01 crc kubenswrapper[4828]: I1205 19:52:01.688281 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerDied","Data":"e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c"}
Dec 05 19:52:01 crc kubenswrapper[4828]: I1205 19:52:01.688842 4828 scope.go:117] "RemoveContainer" containerID="a779f86b0916f1e6e49b4ba52144379301c8e78e68e352632dc079017c30d154"
Dec 05 19:52:01 crc kubenswrapper[4828]: I1205 19:52:01.689455 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c"
Dec 05 19:52:01 crc kubenswrapper[4828]: E1205 19:52:01.689876 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:52:05 crc kubenswrapper[4828]: I1205 19:52:05.117783 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:52:05 crc kubenswrapper[4828]: I1205 19:52:05.118313 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5"
Dec 05 19:52:05 crc kubenswrapper[4828]: I1205 19:52:05.118946 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c"
Dec 05 19:52:05 crc kubenswrapper[4828]: E1205 19:52:05.119157 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 19:52:05 crc kubenswrapper[4828]: I1205 19:52:05.259726 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 19:52:05 crc kubenswrapper[4828]: I1205 19:52:05.259875 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 19:52:14 crc kubenswrapper[4828]: I1205 19:52:14.823956 4828 generic.go:334] "Generic (PLEG): container finished" podID="e0dde2a7-439b-4b5a-8e4b-363089a9879a" containerID="da2741f8c220741823b79cfa4c281e0f67ec84d335fc4877bdcd4a3e069948e0" exitCode=0
Dec 05 19:52:14 crc kubenswrapper[4828]: I1205 19:52:14.824115 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b" event={"ID":"e0dde2a7-439b-4b5a-8e4b-363089a9879a","Type":"ContainerDied","Data":"da2741f8c220741823b79cfa4c281e0f67ec84d335fc4877bdcd4a3e069948e0"}
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.272755 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b"
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.439912 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-2\") pod \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") "
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.440008 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-1\") pod \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") "
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.440134 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-0\") pod \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") "
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.440202 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-telemetry-combined-ca-bundle\") pod \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") "
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.440303 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ssh-key\") pod \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") "
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.440432 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wszg6\" (UniqueName: \"kubernetes.io/projected/e0dde2a7-439b-4b5a-8e4b-363089a9879a-kube-api-access-wszg6\") pod \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") "
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.440466 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-inventory\") pod \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\" (UID: \"e0dde2a7-439b-4b5a-8e4b-363089a9879a\") "
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.447052 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e0dde2a7-439b-4b5a-8e4b-363089a9879a" (UID: "e0dde2a7-439b-4b5a-8e4b-363089a9879a"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.447095 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0dde2a7-439b-4b5a-8e4b-363089a9879a-kube-api-access-wszg6" (OuterVolumeSpecName: "kube-api-access-wszg6") pod "e0dde2a7-439b-4b5a-8e4b-363089a9879a" (UID: "e0dde2a7-439b-4b5a-8e4b-363089a9879a"). InnerVolumeSpecName "kube-api-access-wszg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.472010 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e0dde2a7-439b-4b5a-8e4b-363089a9879a" (UID: "e0dde2a7-439b-4b5a-8e4b-363089a9879a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.479007 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-inventory" (OuterVolumeSpecName: "inventory") pod "e0dde2a7-439b-4b5a-8e4b-363089a9879a" (UID: "e0dde2a7-439b-4b5a-8e4b-363089a9879a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.481977 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "e0dde2a7-439b-4b5a-8e4b-363089a9879a" (UID: "e0dde2a7-439b-4b5a-8e4b-363089a9879a"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.482021 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "e0dde2a7-439b-4b5a-8e4b-363089a9879a" (UID: "e0dde2a7-439b-4b5a-8e4b-363089a9879a").
InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.485984 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "e0dde2a7-439b-4b5a-8e4b-363089a9879a" (UID: "e0dde2a7-439b-4b5a-8e4b-363089a9879a"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.543654 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wszg6\" (UniqueName: \"kubernetes.io/projected/e0dde2a7-439b-4b5a-8e4b-363089a9879a-kube-api-access-wszg6\") on node \"crc\" DevicePath \"\"" Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.543695 4828 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.543710 4828 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.543724 4828 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.543738 4828 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.543751 4828 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.543763 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0dde2a7-439b-4b5a-8e4b-363089a9879a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.857563 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b" event={"ID":"e0dde2a7-439b-4b5a-8e4b-363089a9879a","Type":"ContainerDied","Data":"2969ae0eced6fc9ca36dfaceec888652253c4ea8aa7bac13c236780d12f9812b"} Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.857649 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2969ae0eced6fc9ca36dfaceec888652253c4ea8aa7bac13c236780d12f9812b" Dec 05 19:52:16 crc kubenswrapper[4828]: I1205 19:52:16.857734 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b" Dec 05 19:52:20 crc kubenswrapper[4828]: I1205 19:52:20.453968 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:52:20 crc kubenswrapper[4828]: E1205 19:52:20.454740 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:52:35 crc kubenswrapper[4828]: I1205 19:52:35.259401 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 19:52:35 crc kubenswrapper[4828]: I1205 19:52:35.259963 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 19:52:35 crc kubenswrapper[4828]: I1205 19:52:35.260019 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 19:52:35 crc kubenswrapper[4828]: I1205 19:52:35.260782 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 19:52:35 crc kubenswrapper[4828]: I1205 19:52:35.260862 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" gracePeriod=600 Dec 05 19:52:35 crc kubenswrapper[4828]: E1205 19:52:35.387290 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:52:35 crc kubenswrapper[4828]: I1205 19:52:35.446109 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:52:35 crc kubenswrapper[4828]: E1205 19:52:35.446457 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:52:36 crc kubenswrapper[4828]: I1205 19:52:36.036135 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" exitCode=0 Dec 05 19:52:36 crc kubenswrapper[4828]: I1205 19:52:36.036210 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5"} Dec 05 19:52:36 crc kubenswrapper[4828]: I1205 19:52:36.036478 4828 scope.go:117] "RemoveContainer" containerID="e1e4002a04773f5dc5d6b89fc28afdf2051b7eba9f56198738b63e196e04f834" Dec 05 19:52:36 crc kubenswrapper[4828]: I1205 19:52:36.037075 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:52:36 crc kubenswrapper[4828]: E1205 19:52:36.037327 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:52:47 crc kubenswrapper[4828]: I1205 19:52:47.446948 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:52:47 crc kubenswrapper[4828]: E1205 19:52:47.448072 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:52:51 crc kubenswrapper[4828]: I1205 19:52:51.447103 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:52:51 crc kubenswrapper[4828]: E1205 19:52:51.448225 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:52:58 crc kubenswrapper[4828]: I1205 19:52:58.446913 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:52:58 crc kubenswrapper[4828]: E1205 19:52:58.447931 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" 
pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.430726 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 05 19:52:59 crc kubenswrapper[4828]: E1205 19:52:59.431547 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43692314-f749-4fd5-ac51-076c2a51301f" containerName="extract-content" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.431572 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="43692314-f749-4fd5-ac51-076c2a51301f" containerName="extract-content" Dec 05 19:52:59 crc kubenswrapper[4828]: E1205 19:52:59.431597 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43692314-f749-4fd5-ac51-076c2a51301f" containerName="registry-server" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.431607 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="43692314-f749-4fd5-ac51-076c2a51301f" containerName="registry-server" Dec 05 19:52:59 crc kubenswrapper[4828]: E1205 19:52:59.431628 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43692314-f749-4fd5-ac51-076c2a51301f" containerName="extract-utilities" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.431636 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="43692314-f749-4fd5-ac51-076c2a51301f" containerName="extract-utilities" Dec 05 19:52:59 crc kubenswrapper[4828]: E1205 19:52:59.431673 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0dde2a7-439b-4b5a-8e4b-363089a9879a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.431684 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0dde2a7-439b-4b5a-8e4b-363089a9879a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.432033 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0dde2a7-439b-4b5a-8e4b-363089a9879a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.432074 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="43692314-f749-4fd5-ac51-076c2a51301f" containerName="registry-server" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.433114 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.435685 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vlxgx" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.437427 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.437540 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.445553 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.469682 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.604027 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9d71b946-ed36-403c-9faf-feb03f741474-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.604338 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.604437 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spl4f\" (UniqueName: \"kubernetes.io/projected/9d71b946-ed36-403c-9faf-feb03f741474-kube-api-access-spl4f\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.604501 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.604535 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d71b946-ed36-403c-9faf-feb03f741474-config-data\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.604620 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.604695 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/9d71b946-ed36-403c-9faf-feb03f741474-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.604808 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.604895 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d71b946-ed36-403c-9faf-feb03f741474-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.706378 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d71b946-ed36-403c-9faf-feb03f741474-config-data\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.706450 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.706496 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9d71b946-ed36-403c-9faf-feb03f741474-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.706549 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.706579 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d71b946-ed36-403c-9faf-feb03f741474-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.706600 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9d71b946-ed36-403c-9faf-feb03f741474-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.706639 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-openstack-config-secret\") 
pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.706673 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spl4f\" (UniqueName: \"kubernetes.io/projected/9d71b946-ed36-403c-9faf-feb03f741474-kube-api-access-spl4f\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.706713 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.707085 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.707340 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9d71b946-ed36-403c-9faf-feb03f741474-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.708442 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9d71b946-ed36-403c-9faf-feb03f741474-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.708995 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d71b946-ed36-403c-9faf-feb03f741474-config-data\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.711313 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d71b946-ed36-403c-9faf-feb03f741474-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.715156 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.724054 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc 
kubenswrapper[4828]: I1205 19:52:59.725006 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.737605 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spl4f\" (UniqueName: \"kubernetes.io/projected/9d71b946-ed36-403c-9faf-feb03f741474-kube-api-access-spl4f\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.751408 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " pod="openstack/tempest-tests-tempest" Dec 05 19:52:59 crc kubenswrapper[4828]: I1205 19:52:59.774081 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 05 19:53:00 crc kubenswrapper[4828]: I1205 19:53:00.222409 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 05 19:53:00 crc kubenswrapper[4828]: I1205 19:53:00.232413 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 19:53:00 crc kubenswrapper[4828]: I1205 19:53:00.305388 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9d71b946-ed36-403c-9faf-feb03f741474","Type":"ContainerStarted","Data":"8df6794f19db3051c4553a581432707d3696f3c6cae911c876c4daba23bf52cb"} Dec 05 19:53:04 crc kubenswrapper[4828]: I1205 19:53:04.446334 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:53:04 crc kubenswrapper[4828]: E1205 19:53:04.447020 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:53:09 crc kubenswrapper[4828]: I1205 19:53:09.447235 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:53:09 crc kubenswrapper[4828]: E1205 19:53:09.447997 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:53:16 crc kubenswrapper[4828]: I1205 19:53:16.446926 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:53:16 crc kubenswrapper[4828]: E1205 19:53:16.447621 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:53:20 crc kubenswrapper[4828]: I1205 19:53:20.447306 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:53:20 crc kubenswrapper[4828]: E1205 19:53:20.448132 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:53:30 crc kubenswrapper[4828]: E1205 19:53:30.179029 4828 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 05 19:53:30 crc kubenswrapper[4828]: E1205 19:53:30.180075 4828 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spl4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[]
,Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(9d71b946-ed36-403c-9faf-feb03f741474): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 19:53:30 crc kubenswrapper[4828]: E1205 19:53:30.181699 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="9d71b946-ed36-403c-9faf-feb03f741474" Dec 05 19:53:30 crc kubenswrapper[4828]: I1205 19:53:30.447477 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:53:30 crc kubenswrapper[4828]: E1205 19:53:30.447984 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:53:30 crc kubenswrapper[4828]: E1205 19:53:30.625254 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="9d71b946-ed36-403c-9faf-feb03f741474" Dec 05 19:53:33 crc kubenswrapper[4828]: I1205 19:53:33.446655 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:53:33 crc kubenswrapper[4828]: E1205 19:53:33.447505 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:53:41 crc kubenswrapper[4828]: I1205 19:53:41.447872 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:53:41 crc kubenswrapper[4828]: E1205 19:53:41.448541 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:53:45 crc kubenswrapper[4828]: I1205 19:53:45.446560 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:53:45 crc kubenswrapper[4828]: E1205 19:53:45.447923 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:53:46 crc kubenswrapper[4828]: I1205 19:53:46.112912 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 05 19:53:47 crc kubenswrapper[4828]: I1205 19:53:47.819621 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9d71b946-ed36-403c-9faf-feb03f741474","Type":"ContainerStarted","Data":"0b263ad96f0dffce32f22dd587fdff65d725d6e8242d39dd02f9eab7f01fd73f"} Dec 05 19:53:47 crc kubenswrapper[4828]: I1205 19:53:47.844333 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.966815353 podStartE2EDuration="49.844312915s" podCreationTimestamp="2025-12-05 19:52:58 +0000 UTC" firstStartedPulling="2025-12-05 19:53:00.232103622 +0000 UTC m=+2958.127325938" lastFinishedPulling="2025-12-05 19:53:46.109601194 +0000 UTC m=+3004.004823500" observedRunningTime="2025-12-05 19:53:47.838069066 +0000 UTC m=+3005.733291372" watchObservedRunningTime="2025-12-05 19:53:47.844312915 +0000 UTC m=+3005.739535221" Dec 05 19:53:54 crc kubenswrapper[4828]: I1205 19:53:54.447115 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:53:54 crc kubenswrapper[4828]: E1205 19:53:54.447814 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:53:58 crc kubenswrapper[4828]: I1205 19:53:58.447421 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:53:58 crc kubenswrapper[4828]: E1205 19:53:58.448465 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:54:06 crc kubenswrapper[4828]: I1205 19:54:06.446144 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" 
Dec 05 19:54:06 crc kubenswrapper[4828]: E1205 19:54:06.446853 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:54:12 crc kubenswrapper[4828]: I1205 19:54:12.458045 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:54:12 crc kubenswrapper[4828]: E1205 19:54:12.460127 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:54:17 crc kubenswrapper[4828]: I1205 19:54:17.447128 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:54:17 crc kubenswrapper[4828]: E1205 19:54:17.447957 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:54:26 crc kubenswrapper[4828]: I1205 19:54:26.447227 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:54:26 crc kubenswrapper[4828]: E1205 19:54:26.447807 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:54:28 crc kubenswrapper[4828]: I1205 19:54:28.446604 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:54:28 crc kubenswrapper[4828]: E1205 19:54:28.447264 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:54:38 crc kubenswrapper[4828]: I1205 19:54:38.446972 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:54:38 crc kubenswrapper[4828]: E1205 19:54:38.447741 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:54:39 crc kubenswrapper[4828]: I1205 19:54:39.446865 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:54:39 crc kubenswrapper[4828]: E1205 19:54:39.447469 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:54:52 crc kubenswrapper[4828]: I1205 19:54:52.451246 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:54:52 crc kubenswrapper[4828]: E1205 19:54:52.451889 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:54:53 crc kubenswrapper[4828]: I1205 19:54:53.446667 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:54:53 crc kubenswrapper[4828]: E1205 19:54:53.447192 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:55:04 crc kubenswrapper[4828]: I1205 19:55:04.446095 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:55:04 crc kubenswrapper[4828]: E1205 19:55:04.446949 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:55:08 crc kubenswrapper[4828]: I1205 19:55:08.447654 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:55:08 crc kubenswrapper[4828]: E1205 19:55:08.448347 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" 
pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:55:17 crc kubenswrapper[4828]: I1205 19:55:17.446639 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:55:17 crc kubenswrapper[4828]: E1205 19:55:17.447978 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:55:19 crc kubenswrapper[4828]: I1205 19:55:19.447436 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:55:19 crc kubenswrapper[4828]: E1205 19:55:19.448108 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:55:30 crc kubenswrapper[4828]: I1205 19:55:30.447269 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:55:30 crc kubenswrapper[4828]: I1205 19:55:30.448146 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:55:30 crc kubenswrapper[4828]: E1205 19:55:30.448319 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:55:30 crc kubenswrapper[4828]: E1205 19:55:30.448592 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:55:41 crc kubenswrapper[4828]: I1205 19:55:41.447123 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:55:41 crc kubenswrapper[4828]: E1205 19:55:41.447978 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:55:43 crc kubenswrapper[4828]: I1205 19:55:43.447656 4828 scope.go:117] 
"RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:55:43 crc kubenswrapper[4828]: E1205 19:55:43.448529 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:55:44 crc kubenswrapper[4828]: I1205 19:55:44.359927 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fzhq7"] Dec 05 19:55:44 crc kubenswrapper[4828]: I1205 19:55:44.390279 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:44 crc kubenswrapper[4828]: I1205 19:55:44.398375 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzhq7"] Dec 05 19:55:44 crc kubenswrapper[4828]: I1205 19:55:44.586508 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f976z\" (UniqueName: \"kubernetes.io/projected/4d52783c-9198-4201-b9d4-30ca439bb13b-kube-api-access-f976z\") pod \"redhat-operators-fzhq7\" (UID: \"4d52783c-9198-4201-b9d4-30ca439bb13b\") " pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:44 crc kubenswrapper[4828]: I1205 19:55:44.586635 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52783c-9198-4201-b9d4-30ca439bb13b-utilities\") pod \"redhat-operators-fzhq7\" (UID: \"4d52783c-9198-4201-b9d4-30ca439bb13b\") " pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:44 crc kubenswrapper[4828]: I1205 19:55:44.586662 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52783c-9198-4201-b9d4-30ca439bb13b-catalog-content\") pod \"redhat-operators-fzhq7\" (UID: \"4d52783c-9198-4201-b9d4-30ca439bb13b\") " pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:44 crc kubenswrapper[4828]: I1205 19:55:44.688483 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52783c-9198-4201-b9d4-30ca439bb13b-utilities\") pod \"redhat-operators-fzhq7\" (UID: \"4d52783c-9198-4201-b9d4-30ca439bb13b\") " pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:44 crc kubenswrapper[4828]: I1205 19:55:44.688537 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52783c-9198-4201-b9d4-30ca439bb13b-catalog-content\") pod \"redhat-operators-fzhq7\" (UID: \"4d52783c-9198-4201-b9d4-30ca439bb13b\") " pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:44 crc kubenswrapper[4828]: I1205 19:55:44.688668 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f976z\" (UniqueName: \"kubernetes.io/projected/4d52783c-9198-4201-b9d4-30ca439bb13b-kube-api-access-f976z\") pod \"redhat-operators-fzhq7\" (UID: \"4d52783c-9198-4201-b9d4-30ca439bb13b\") " pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:44 crc kubenswrapper[4828]: 
I1205 19:55:44.689231 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52783c-9198-4201-b9d4-30ca439bb13b-utilities\") pod \"redhat-operators-fzhq7\" (UID: \"4d52783c-9198-4201-b9d4-30ca439bb13b\") " pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:44 crc kubenswrapper[4828]: I1205 19:55:44.689344 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52783c-9198-4201-b9d4-30ca439bb13b-catalog-content\") pod \"redhat-operators-fzhq7\" (UID: \"4d52783c-9198-4201-b9d4-30ca439bb13b\") " pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:44 crc kubenswrapper[4828]: I1205 19:55:44.710660 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f976z\" (UniqueName: \"kubernetes.io/projected/4d52783c-9198-4201-b9d4-30ca439bb13b-kube-api-access-f976z\") pod \"redhat-operators-fzhq7\" (UID: \"4d52783c-9198-4201-b9d4-30ca439bb13b\") " pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:44 crc kubenswrapper[4828]: I1205 19:55:44.723983 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:45 crc kubenswrapper[4828]: I1205 19:55:45.170107 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzhq7"] Dec 05 19:55:46 crc kubenswrapper[4828]: I1205 19:55:46.032904 4828 generic.go:334] "Generic (PLEG): container finished" podID="4d52783c-9198-4201-b9d4-30ca439bb13b" containerID="fdcb79a7770f153e8da6abf17259a8363249126f1bdd3b88e010070c50330388" exitCode=0 Dec 05 19:55:46 crc kubenswrapper[4828]: I1205 19:55:46.032970 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzhq7" event={"ID":"4d52783c-9198-4201-b9d4-30ca439bb13b","Type":"ContainerDied","Data":"fdcb79a7770f153e8da6abf17259a8363249126f1bdd3b88e010070c50330388"} Dec 05 19:55:46 crc kubenswrapper[4828]: I1205 19:55:46.033383 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzhq7" event={"ID":"4d52783c-9198-4201-b9d4-30ca439bb13b","Type":"ContainerStarted","Data":"f321c24b7994aca66342177ec0d3c65d62052d1f36584fb64ece9b3bfb9ae21a"} Dec 05 19:55:47 crc kubenswrapper[4828]: I1205 19:55:47.045756 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzhq7" event={"ID":"4d52783c-9198-4201-b9d4-30ca439bb13b","Type":"ContainerStarted","Data":"25c0d74b898da69d194aa60c7828d48e844ff7637433dcc41772f7dd5c1e802f"} Dec 05 19:55:49 crc kubenswrapper[4828]: I1205 19:55:49.062526 4828 generic.go:334] "Generic (PLEG): container finished" podID="4d52783c-9198-4201-b9d4-30ca439bb13b" containerID="25c0d74b898da69d194aa60c7828d48e844ff7637433dcc41772f7dd5c1e802f" exitCode=0 Dec 05 19:55:49 crc kubenswrapper[4828]: I1205 19:55:49.062612 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzhq7" event={"ID":"4d52783c-9198-4201-b9d4-30ca439bb13b","Type":"ContainerDied","Data":"25c0d74b898da69d194aa60c7828d48e844ff7637433dcc41772f7dd5c1e802f"} Dec 05 19:55:50 crc kubenswrapper[4828]: I1205 19:55:50.073995 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzhq7" 
event={"ID":"4d52783c-9198-4201-b9d4-30ca439bb13b","Type":"ContainerStarted","Data":"7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0"} Dec 05 19:55:50 crc kubenswrapper[4828]: I1205 19:55:50.103511 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fzhq7" podStartSLOduration=2.649027446 podStartE2EDuration="6.103490471s" podCreationTimestamp="2025-12-05 19:55:44 +0000 UTC" firstStartedPulling="2025-12-05 19:55:46.034785363 +0000 UTC m=+3123.930007669" lastFinishedPulling="2025-12-05 19:55:49.489248368 +0000 UTC m=+3127.384470694" observedRunningTime="2025-12-05 19:55:50.092988077 +0000 UTC m=+3127.988210383" watchObservedRunningTime="2025-12-05 19:55:50.103490471 +0000 UTC m=+3127.998712817" Dec 05 19:55:54 crc kubenswrapper[4828]: I1205 19:55:54.724985 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:54 crc kubenswrapper[4828]: I1205 19:55:54.725576 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:55:55 crc kubenswrapper[4828]: I1205 19:55:55.446889 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:55:55 crc kubenswrapper[4828]: I1205 19:55:55.447051 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:55:55 crc kubenswrapper[4828]: E1205 19:55:55.447330 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:55:55 crc kubenswrapper[4828]: E1205 19:55:55.447390 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:55:55 crc kubenswrapper[4828]: I1205 19:55:55.777898 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fzhq7" podUID="4d52783c-9198-4201-b9d4-30ca439bb13b" containerName="registry-server" probeResult="failure" output=< Dec 05 19:55:55 crc kubenswrapper[4828]: timeout: failed to connect service ":50051" within 1s Dec 05 19:55:55 crc kubenswrapper[4828]: > Dec 05 19:56:04 crc kubenswrapper[4828]: I1205 19:56:04.989020 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:56:05 crc kubenswrapper[4828]: I1205 19:56:05.071523 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:56:05 crc kubenswrapper[4828]: I1205 19:56:05.238340 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzhq7"] Dec 05 19:56:06 crc kubenswrapper[4828]: I1205 19:56:06.236455 4828 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fzhq7" podUID="4d52783c-9198-4201-b9d4-30ca439bb13b" containerName="registry-server" containerID="cri-o://7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0" gracePeriod=2 Dec 05 19:56:06 crc kubenswrapper[4828]: I1205 19:56:06.447863 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:56:06 crc kubenswrapper[4828]: E1205 19:56:06.448302 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:56:06 crc kubenswrapper[4828]: I1205 19:56:06.716628 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:56:06 crc kubenswrapper[4828]: I1205 19:56:06.913969 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52783c-9198-4201-b9d4-30ca439bb13b-utilities\") pod \"4d52783c-9198-4201-b9d4-30ca439bb13b\" (UID: \"4d52783c-9198-4201-b9d4-30ca439bb13b\") " Dec 05 19:56:06 crc kubenswrapper[4828]: I1205 19:56:06.914426 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f976z\" (UniqueName: \"kubernetes.io/projected/4d52783c-9198-4201-b9d4-30ca439bb13b-kube-api-access-f976z\") pod \"4d52783c-9198-4201-b9d4-30ca439bb13b\" (UID: \"4d52783c-9198-4201-b9d4-30ca439bb13b\") " Dec 05 19:56:06 crc kubenswrapper[4828]: I1205 19:56:06.914552 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52783c-9198-4201-b9d4-30ca439bb13b-catalog-content\") pod \"4d52783c-9198-4201-b9d4-30ca439bb13b\" (UID: \"4d52783c-9198-4201-b9d4-30ca439bb13b\") " Dec 05 19:56:06 crc kubenswrapper[4828]: I1205 19:56:06.915457 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d52783c-9198-4201-b9d4-30ca439bb13b-utilities" (OuterVolumeSpecName: "utilities") pod "4d52783c-9198-4201-b9d4-30ca439bb13b" (UID: "4d52783c-9198-4201-b9d4-30ca439bb13b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:56:06 crc kubenswrapper[4828]: I1205 19:56:06.925664 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d52783c-9198-4201-b9d4-30ca439bb13b-kube-api-access-f976z" (OuterVolumeSpecName: "kube-api-access-f976z") pod "4d52783c-9198-4201-b9d4-30ca439bb13b" (UID: "4d52783c-9198-4201-b9d4-30ca439bb13b"). InnerVolumeSpecName "kube-api-access-f976z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.011977 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d52783c-9198-4201-b9d4-30ca439bb13b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d52783c-9198-4201-b9d4-30ca439bb13b" (UID: "4d52783c-9198-4201-b9d4-30ca439bb13b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.017007 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52783c-9198-4201-b9d4-30ca439bb13b-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.017055 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f976z\" (UniqueName: \"kubernetes.io/projected/4d52783c-9198-4201-b9d4-30ca439bb13b-kube-api-access-f976z\") on node \"crc\" DevicePath \"\"" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.017069 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52783c-9198-4201-b9d4-30ca439bb13b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.246726 4828 generic.go:334] "Generic (PLEG): container finished" podID="4d52783c-9198-4201-b9d4-30ca439bb13b" containerID="7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0" exitCode=0 Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.246783 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzhq7" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.246783 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzhq7" event={"ID":"4d52783c-9198-4201-b9d4-30ca439bb13b","Type":"ContainerDied","Data":"7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0"} Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.246876 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzhq7" event={"ID":"4d52783c-9198-4201-b9d4-30ca439bb13b","Type":"ContainerDied","Data":"f321c24b7994aca66342177ec0d3c65d62052d1f36584fb64ece9b3bfb9ae21a"} Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.246909 4828 scope.go:117] "RemoveContainer" containerID="7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.292850 4828 scope.go:117] "RemoveContainer" containerID="25c0d74b898da69d194aa60c7828d48e844ff7637433dcc41772f7dd5c1e802f" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.294584 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzhq7"] Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.305582 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fzhq7"] Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.317003 4828 scope.go:117] "RemoveContainer" containerID="fdcb79a7770f153e8da6abf17259a8363249126f1bdd3b88e010070c50330388" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.362756 4828 scope.go:117] "RemoveContainer" containerID="7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0" Dec 05 19:56:07 crc kubenswrapper[4828]: E1205 19:56:07.363206 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0\": container with ID starting with 7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0 not found: ID does not exist" containerID="7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.363234 4828 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0"} err="failed to get container status \"7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0\": rpc error: code = NotFound desc = could not find container \"7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0\": container with ID starting with 7edc8212b5e9fdfef00a1780915f328c93bcec38451ae61d7cf6e50654ca8bc0 not found: ID does not exist" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.363254 4828 scope.go:117] "RemoveContainer" containerID="25c0d74b898da69d194aa60c7828d48e844ff7637433dcc41772f7dd5c1e802f" Dec 05 19:56:07 crc kubenswrapper[4828]: E1205 19:56:07.363598 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c0d74b898da69d194aa60c7828d48e844ff7637433dcc41772f7dd5c1e802f\": container with ID starting with 25c0d74b898da69d194aa60c7828d48e844ff7637433dcc41772f7dd5c1e802f not found: ID does not exist" containerID="25c0d74b898da69d194aa60c7828d48e844ff7637433dcc41772f7dd5c1e802f" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.363646 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c0d74b898da69d194aa60c7828d48e844ff7637433dcc41772f7dd5c1e802f"} err="failed to get container status \"25c0d74b898da69d194aa60c7828d48e844ff7637433dcc41772f7dd5c1e802f\": rpc error: code = NotFound desc = could not find container \"25c0d74b898da69d194aa60c7828d48e844ff7637433dcc41772f7dd5c1e802f\": container with ID starting with 25c0d74b898da69d194aa60c7828d48e844ff7637433dcc41772f7dd5c1e802f not found: ID does not exist" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.363680 4828 scope.go:117] "RemoveContainer" containerID="fdcb79a7770f153e8da6abf17259a8363249126f1bdd3b88e010070c50330388" Dec 05 19:56:07 crc kubenswrapper[4828]: E1205 19:56:07.364127 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdcb79a7770f153e8da6abf17259a8363249126f1bdd3b88e010070c50330388\": container with ID starting with fdcb79a7770f153e8da6abf17259a8363249126f1bdd3b88e010070c50330388 not found: ID does not exist" containerID="fdcb79a7770f153e8da6abf17259a8363249126f1bdd3b88e010070c50330388" Dec 05 19:56:07 crc kubenswrapper[4828]: I1205 19:56:07.364196 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdcb79a7770f153e8da6abf17259a8363249126f1bdd3b88e010070c50330388"} err="failed to get container status \"fdcb79a7770f153e8da6abf17259a8363249126f1bdd3b88e010070c50330388\": rpc error: code = NotFound desc = could not find container \"fdcb79a7770f153e8da6abf17259a8363249126f1bdd3b88e010070c50330388\": container with ID starting with fdcb79a7770f153e8da6abf17259a8363249126f1bdd3b88e010070c50330388 not found: ID does not exist" Dec 05 19:56:08 crc kubenswrapper[4828]: I1205 19:56:08.456639 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d52783c-9198-4201-b9d4-30ca439bb13b" path="/var/lib/kubelet/pods/4d52783c-9198-4201-b9d4-30ca439bb13b/volumes" Dec 05 19:56:09 crc kubenswrapper[4828]: I1205 19:56:09.446300 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:56:09 crc kubenswrapper[4828]: E1205 19:56:09.446559 4828 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.411806 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mbh5n"] Dec 05 19:56:16 crc kubenswrapper[4828]: E1205 19:56:16.412655 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d52783c-9198-4201-b9d4-30ca439bb13b" containerName="extract-utilities" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.412670 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d52783c-9198-4201-b9d4-30ca439bb13b" containerName="extract-utilities" Dec 05 19:56:16 crc kubenswrapper[4828]: E1205 19:56:16.412688 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d52783c-9198-4201-b9d4-30ca439bb13b" containerName="registry-server" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.412696 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d52783c-9198-4201-b9d4-30ca439bb13b" containerName="registry-server" Dec 05 19:56:16 crc kubenswrapper[4828]: E1205 19:56:16.412716 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d52783c-9198-4201-b9d4-30ca439bb13b" containerName="extract-content" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.412724 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d52783c-9198-4201-b9d4-30ca439bb13b" containerName="extract-content" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.412931 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d52783c-9198-4201-b9d4-30ca439bb13b" containerName="registry-server" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.414336 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.427297 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbh5n"] Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.507579 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb27a893-078c-4df6-96ab-fecc9e3b1a62-catalog-content\") pod \"redhat-marketplace-mbh5n\" (UID: \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\") " pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.507997 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpqqq\" (UniqueName: \"kubernetes.io/projected/bb27a893-078c-4df6-96ab-fecc9e3b1a62-kube-api-access-tpqqq\") pod \"redhat-marketplace-mbh5n\" (UID: \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\") " pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.508061 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb27a893-078c-4df6-96ab-fecc9e3b1a62-utilities\") pod \"redhat-marketplace-mbh5n\" (UID: \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\") " pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.609473 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpqqq\" (UniqueName: \"kubernetes.io/projected/bb27a893-078c-4df6-96ab-fecc9e3b1a62-kube-api-access-tpqqq\") pod \"redhat-marketplace-mbh5n\" (UID: \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\") " pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.609592 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb27a893-078c-4df6-96ab-fecc9e3b1a62-utilities\") pod \"redhat-marketplace-mbh5n\" (UID: \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\") " pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.609782 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb27a893-078c-4df6-96ab-fecc9e3b1a62-catalog-content\") pod \"redhat-marketplace-mbh5n\" (UID: \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\") " pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.610284 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb27a893-078c-4df6-96ab-fecc9e3b1a62-utilities\") pod \"redhat-marketplace-mbh5n\" (UID: \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\") " pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.610341 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb27a893-078c-4df6-96ab-fecc9e3b1a62-catalog-content\") pod \"redhat-marketplace-mbh5n\" (UID: \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\") " pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.628049 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tpqqq\" (UniqueName: \"kubernetes.io/projected/bb27a893-078c-4df6-96ab-fecc9e3b1a62-kube-api-access-tpqqq\") pod \"redhat-marketplace-mbh5n\" (UID: \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\") " pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:16 crc kubenswrapper[4828]: I1205 19:56:16.735123 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:17 crc kubenswrapper[4828]: I1205 19:56:17.288490 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbh5n"] Dec 05 19:56:17 crc kubenswrapper[4828]: I1205 19:56:17.334989 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbh5n" event={"ID":"bb27a893-078c-4df6-96ab-fecc9e3b1a62","Type":"ContainerStarted","Data":"c139fade6b31b5fa962b46006daa1e6fbb948ef9ba360de4aa77678e531b8dc1"} Dec 05 19:56:18 crc kubenswrapper[4828]: I1205 19:56:18.345561 4828 generic.go:334] "Generic (PLEG): container finished" podID="bb27a893-078c-4df6-96ab-fecc9e3b1a62" containerID="8226975d95c18f556e2e01f244353974c2a178ae7bd4035951a776677d70ef7b" exitCode=0 Dec 05 19:56:18 crc kubenswrapper[4828]: I1205 19:56:18.345695 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbh5n" event={"ID":"bb27a893-078c-4df6-96ab-fecc9e3b1a62","Type":"ContainerDied","Data":"8226975d95c18f556e2e01f244353974c2a178ae7bd4035951a776677d70ef7b"} Dec 05 19:56:19 crc kubenswrapper[4828]: I1205 19:56:19.356193 4828 generic.go:334] "Generic (PLEG): container finished" podID="bb27a893-078c-4df6-96ab-fecc9e3b1a62" containerID="2927054a2f7d24111271fd6523b53aa87e7e4aa38aec262eb75e5fa8276fd7f3" exitCode=0 Dec 05 19:56:19 crc kubenswrapper[4828]: I1205 19:56:19.356546 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbh5n" event={"ID":"bb27a893-078c-4df6-96ab-fecc9e3b1a62","Type":"ContainerDied","Data":"2927054a2f7d24111271fd6523b53aa87e7e4aa38aec262eb75e5fa8276fd7f3"} Dec 05 19:56:20 crc kubenswrapper[4828]: I1205 19:56:20.381448 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbh5n" event={"ID":"bb27a893-078c-4df6-96ab-fecc9e3b1a62","Type":"ContainerStarted","Data":"dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608"} Dec 05 19:56:20 crc kubenswrapper[4828]: I1205 19:56:20.405909 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mbh5n" podStartSLOduration=2.936005258 podStartE2EDuration="4.405889977s" podCreationTimestamp="2025-12-05 19:56:16 +0000 UTC" firstStartedPulling="2025-12-05 19:56:18.34750607 +0000 UTC m=+3156.242728376" lastFinishedPulling="2025-12-05 19:56:19.817390789 +0000 UTC m=+3157.712613095" observedRunningTime="2025-12-05 19:56:20.402129946 +0000 UTC m=+3158.297352272" watchObservedRunningTime="2025-12-05 19:56:20.405889977 +0000 UTC m=+3158.301112283" Dec 05 19:56:20 crc kubenswrapper[4828]: I1205 19:56:20.446462 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:56:20 crc kubenswrapper[4828]: E1205 19:56:20.446711 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:56:21 crc kubenswrapper[4828]: I1205 19:56:21.446882 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:56:21 crc kubenswrapper[4828]: E1205 19:56:21.447865 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:56:26 crc kubenswrapper[4828]: I1205 19:56:26.736189 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:26 crc kubenswrapper[4828]: I1205 19:56:26.736694 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:26 crc kubenswrapper[4828]: I1205 19:56:26.788164 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:27 crc kubenswrapper[4828]: I1205 19:56:27.501218 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:27 crc kubenswrapper[4828]: I1205 19:56:27.562705 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbh5n"] Dec 05 19:56:29 crc kubenswrapper[4828]: I1205 19:56:29.453548 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mbh5n" podUID="bb27a893-078c-4df6-96ab-fecc9e3b1a62" containerName="registry-server" containerID="cri-o://dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608" gracePeriod=2 Dec 05 19:56:29 crc kubenswrapper[4828]: I1205 19:56:29.977340 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.012986 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb27a893-078c-4df6-96ab-fecc9e3b1a62-catalog-content\") pod \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\" (UID: \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\") " Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.013116 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpqqq\" (UniqueName: \"kubernetes.io/projected/bb27a893-078c-4df6-96ab-fecc9e3b1a62-kube-api-access-tpqqq\") pod \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\" (UID: \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\") " Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.013212 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb27a893-078c-4df6-96ab-fecc9e3b1a62-utilities\") pod \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\" (UID: \"bb27a893-078c-4df6-96ab-fecc9e3b1a62\") " Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.014092 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb27a893-078c-4df6-96ab-fecc9e3b1a62-utilities" (OuterVolumeSpecName: "utilities") pod "bb27a893-078c-4df6-96ab-fecc9e3b1a62" (UID: "bb27a893-078c-4df6-96ab-fecc9e3b1a62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.019142 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb27a893-078c-4df6-96ab-fecc9e3b1a62-kube-api-access-tpqqq" (OuterVolumeSpecName: "kube-api-access-tpqqq") pod "bb27a893-078c-4df6-96ab-fecc9e3b1a62" (UID: "bb27a893-078c-4df6-96ab-fecc9e3b1a62"). InnerVolumeSpecName "kube-api-access-tpqqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.036732 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb27a893-078c-4df6-96ab-fecc9e3b1a62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb27a893-078c-4df6-96ab-fecc9e3b1a62" (UID: "bb27a893-078c-4df6-96ab-fecc9e3b1a62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.114837 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb27a893-078c-4df6-96ab-fecc9e3b1a62-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.114875 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpqqq\" (UniqueName: \"kubernetes.io/projected/bb27a893-078c-4df6-96ab-fecc9e3b1a62-kube-api-access-tpqqq\") on node \"crc\" DevicePath \"\"" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.114893 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb27a893-078c-4df6-96ab-fecc9e3b1a62-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.472022 4828 generic.go:334] "Generic (PLEG): container finished" podID="bb27a893-078c-4df6-96ab-fecc9e3b1a62" containerID="dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608" exitCode=0 Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.472068 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbh5n" event={"ID":"bb27a893-078c-4df6-96ab-fecc9e3b1a62","Type":"ContainerDied","Data":"dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608"} Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.472124 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbh5n" event={"ID":"bb27a893-078c-4df6-96ab-fecc9e3b1a62","Type":"ContainerDied","Data":"c139fade6b31b5fa962b46006daa1e6fbb948ef9ba360de4aa77678e531b8dc1"} Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.472123 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbh5n" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.472143 4828 scope.go:117] "RemoveContainer" containerID="dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.499736 4828 scope.go:117] "RemoveContainer" containerID="2927054a2f7d24111271fd6523b53aa87e7e4aa38aec262eb75e5fa8276fd7f3" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.513044 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbh5n"] Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.521207 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbh5n"] Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.521447 4828 scope.go:117] "RemoveContainer" containerID="8226975d95c18f556e2e01f244353974c2a178ae7bd4035951a776677d70ef7b" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.575451 4828 scope.go:117] "RemoveContainer" containerID="dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608" Dec 05 19:56:30 crc kubenswrapper[4828]: E1205 19:56:30.575975 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608\": container with ID starting with dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608 not found: ID does not exist" containerID="dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.576006 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608"} err="failed to get container status \"dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608\": rpc error: code = NotFound desc = could not find container \"dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608\": container with ID starting with dd8f00adcabd1adeacbad38321e55134ceffae72c22a7d7f2697406d3f6e3608 not found: ID does not exist" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.576026 4828 scope.go:117] "RemoveContainer" containerID="2927054a2f7d24111271fd6523b53aa87e7e4aa38aec262eb75e5fa8276fd7f3" Dec 05 19:56:30 crc kubenswrapper[4828]: E1205 19:56:30.576420 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2927054a2f7d24111271fd6523b53aa87e7e4aa38aec262eb75e5fa8276fd7f3\": container with ID starting with 2927054a2f7d24111271fd6523b53aa87e7e4aa38aec262eb75e5fa8276fd7f3 not found: ID does not exist" containerID="2927054a2f7d24111271fd6523b53aa87e7e4aa38aec262eb75e5fa8276fd7f3" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.576465 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2927054a2f7d24111271fd6523b53aa87e7e4aa38aec262eb75e5fa8276fd7f3"} err="failed to get container status \"2927054a2f7d24111271fd6523b53aa87e7e4aa38aec262eb75e5fa8276fd7f3\": rpc error: code = NotFound desc = could not find container \"2927054a2f7d24111271fd6523b53aa87e7e4aa38aec262eb75e5fa8276fd7f3\": container with ID starting with 2927054a2f7d24111271fd6523b53aa87e7e4aa38aec262eb75e5fa8276fd7f3 not found: ID does not exist" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.576501 4828 scope.go:117] "RemoveContainer" 
containerID="8226975d95c18f556e2e01f244353974c2a178ae7bd4035951a776677d70ef7b" Dec 05 19:56:30 crc kubenswrapper[4828]: E1205 19:56:30.577233 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8226975d95c18f556e2e01f244353974c2a178ae7bd4035951a776677d70ef7b\": container with ID starting with 8226975d95c18f556e2e01f244353974c2a178ae7bd4035951a776677d70ef7b not found: ID does not exist" containerID="8226975d95c18f556e2e01f244353974c2a178ae7bd4035951a776677d70ef7b" Dec 05 19:56:30 crc kubenswrapper[4828]: I1205 19:56:30.577264 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8226975d95c18f556e2e01f244353974c2a178ae7bd4035951a776677d70ef7b"} err="failed to get container status \"8226975d95c18f556e2e01f244353974c2a178ae7bd4035951a776677d70ef7b\": rpc error: code = NotFound desc = could not find container \"8226975d95c18f556e2e01f244353974c2a178ae7bd4035951a776677d70ef7b\": container with ID starting with 8226975d95c18f556e2e01f244353974c2a178ae7bd4035951a776677d70ef7b not found: ID does not exist" Dec 05 19:56:32 crc kubenswrapper[4828]: I1205 19:56:32.460896 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb27a893-078c-4df6-96ab-fecc9e3b1a62" path="/var/lib/kubelet/pods/bb27a893-078c-4df6-96ab-fecc9e3b1a62/volumes" Dec 05 19:56:33 crc kubenswrapper[4828]: I1205 19:56:33.449154 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:56:33 crc kubenswrapper[4828]: E1205 19:56:33.449640 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:56:36 crc kubenswrapper[4828]: I1205 19:56:36.447744 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:56:36 crc kubenswrapper[4828]: E1205 19:56:36.448399 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:56:48 crc kubenswrapper[4828]: I1205 19:56:48.447216 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:56:48 crc kubenswrapper[4828]: E1205 19:56:48.448345 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:56:49 crc kubenswrapper[4828]: I1205 19:56:49.447294 4828 scope.go:117] "RemoveContainer" 
containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:56:49 crc kubenswrapper[4828]: E1205 19:56:49.448007 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:56:59 crc kubenswrapper[4828]: I1205 19:56:59.446986 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:56:59 crc kubenswrapper[4828]: E1205 19:56:59.447681 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:57:00 crc kubenswrapper[4828]: I1205 19:57:00.446648 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:57:00 crc kubenswrapper[4828]: E1205 19:57:00.447242 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:57:11 crc kubenswrapper[4828]: I1205 19:57:11.446894 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:57:11 crc kubenswrapper[4828]: I1205 19:57:11.900654 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9"} Dec 05 19:57:11 crc kubenswrapper[4828]: I1205 19:57:11.901626 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:57:14 crc kubenswrapper[4828]: I1205 19:57:14.446938 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:57:14 crc kubenswrapper[4828]: E1205 19:57:14.447734 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:57:25 crc kubenswrapper[4828]: I1205 19:57:25.128813 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:57:29 crc 
kubenswrapper[4828]: I1205 19:57:29.446848 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:57:29 crc kubenswrapper[4828]: E1205 19:57:29.447527 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 19:57:44 crc kubenswrapper[4828]: I1205 19:57:44.446702 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 19:57:45 crc kubenswrapper[4828]: I1205 19:57:45.217146 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"15759220b60d7ed708ce1d46839af98b1a9ed090b4ca259095bfd29b16663c22"} Dec 05 19:59:47 crc kubenswrapper[4828]: I1205 19:59:47.499942 4828 generic.go:334] "Generic (PLEG): container finished" podID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" exitCode=1 Dec 05 19:59:47 crc kubenswrapper[4828]: I1205 19:59:47.499990 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerDied","Data":"77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9"} Dec 05 19:59:47 crc kubenswrapper[4828]: I1205 19:59:47.501212 4828 scope.go:117] "RemoveContainer" containerID="e9977cd846559568215f863e86eb79e64661995a673b7472e5d29d74ccee6b8c" Dec 05 19:59:47 crc kubenswrapper[4828]: I1205 19:59:47.502033 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 19:59:47 crc kubenswrapper[4828]: E1205 19:59:47.502361 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 19:59:55 crc kubenswrapper[4828]: I1205 19:59:55.117607 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 19:59:55 crc kubenswrapper[4828]: I1205 19:59:55.118767 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 19:59:55 crc kubenswrapper[4828]: E1205 19:59:55.119079 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.149730 4828 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx"] Dec 05 20:00:00 crc kubenswrapper[4828]: E1205 20:00:00.154212 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb27a893-078c-4df6-96ab-fecc9e3b1a62" containerName="registry-server" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.154385 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb27a893-078c-4df6-96ab-fecc9e3b1a62" containerName="registry-server" Dec 05 20:00:00 crc kubenswrapper[4828]: E1205 20:00:00.154505 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb27a893-078c-4df6-96ab-fecc9e3b1a62" containerName="extract-utilities" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.154591 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb27a893-078c-4df6-96ab-fecc9e3b1a62" containerName="extract-utilities" Dec 05 20:00:00 crc kubenswrapper[4828]: E1205 20:00:00.154685 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb27a893-078c-4df6-96ab-fecc9e3b1a62" containerName="extract-content" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.154759 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb27a893-078c-4df6-96ab-fecc9e3b1a62" containerName="extract-content" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.155209 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb27a893-078c-4df6-96ab-fecc9e3b1a62" containerName="registry-server" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.156669 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.160274 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.160965 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx"] Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.161301 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.250698 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-config-volume\") pod \"collect-profiles-29416080-w89gx\" (UID: \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.250749 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrxzg\" (UniqueName: \"kubernetes.io/projected/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-kube-api-access-mrxzg\") pod \"collect-profiles-29416080-w89gx\" (UID: \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.250944 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-secret-volume\") pod \"collect-profiles-29416080-w89gx\" (UID: 
\"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.352563 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-config-volume\") pod \"collect-profiles-29416080-w89gx\" (UID: \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.352605 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrxzg\" (UniqueName: \"kubernetes.io/projected/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-kube-api-access-mrxzg\") pod \"collect-profiles-29416080-w89gx\" (UID: \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.352681 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-secret-volume\") pod \"collect-profiles-29416080-w89gx\" (UID: \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.353707 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-config-volume\") pod \"collect-profiles-29416080-w89gx\" (UID: \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.359517 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-secret-volume\") pod \"collect-profiles-29416080-w89gx\" (UID: \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.369713 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrxzg\" (UniqueName: \"kubernetes.io/projected/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-kube-api-access-mrxzg\") pod \"collect-profiles-29416080-w89gx\" (UID: \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.487624 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:00 crc kubenswrapper[4828]: I1205 20:00:00.955451 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx"] Dec 05 20:00:01 crc kubenswrapper[4828]: I1205 20:00:01.643810 4828 generic.go:334] "Generic (PLEG): container finished" podID="7f21d9c1-122c-4cf5-ad0c-a73393353ea0" containerID="4b5bcd4cfeb609468eee4e82a8154dde8c6debfe6b9da347f7daadd08f071d37" exitCode=0 Dec 05 20:00:01 crc kubenswrapper[4828]: I1205 20:00:01.643924 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" event={"ID":"7f21d9c1-122c-4cf5-ad0c-a73393353ea0","Type":"ContainerDied","Data":"4b5bcd4cfeb609468eee4e82a8154dde8c6debfe6b9da347f7daadd08f071d37"} Dec 05 20:00:01 crc kubenswrapper[4828]: I1205 20:00:01.644139 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" event={"ID":"7f21d9c1-122c-4cf5-ad0c-a73393353ea0","Type":"ContainerStarted","Data":"1a2d3c9867f9b57a29ae3156ec6d45c1d1b85d13023fd0ba3498a22ad31bf8ae"} Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.119534 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.219262 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-config-volume\") pod \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\" (UID: \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\") " Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.219374 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrxzg\" (UniqueName: \"kubernetes.io/projected/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-kube-api-access-mrxzg\") pod \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\" (UID: \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\") " Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.219420 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-secret-volume\") pod \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\" (UID: \"7f21d9c1-122c-4cf5-ad0c-a73393353ea0\") " Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.220349 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-config-volume" (OuterVolumeSpecName: "config-volume") pod "7f21d9c1-122c-4cf5-ad0c-a73393353ea0" (UID: "7f21d9c1-122c-4cf5-ad0c-a73393353ea0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.225494 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7f21d9c1-122c-4cf5-ad0c-a73393353ea0" (UID: "7f21d9c1-122c-4cf5-ad0c-a73393353ea0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.225551 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-kube-api-access-mrxzg" (OuterVolumeSpecName: "kube-api-access-mrxzg") pod "7f21d9c1-122c-4cf5-ad0c-a73393353ea0" (UID: "7f21d9c1-122c-4cf5-ad0c-a73393353ea0"). InnerVolumeSpecName "kube-api-access-mrxzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.322380 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.322430 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrxzg\" (UniqueName: \"kubernetes.io/projected/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-kube-api-access-mrxzg\") on node \"crc\" DevicePath \"\"" Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.322446 4828 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f21d9c1-122c-4cf5-ad0c-a73393353ea0-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.662355 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" event={"ID":"7f21d9c1-122c-4cf5-ad0c-a73393353ea0","Type":"ContainerDied","Data":"1a2d3c9867f9b57a29ae3156ec6d45c1d1b85d13023fd0ba3498a22ad31bf8ae"} Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.662446 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2d3c9867f9b57a29ae3156ec6d45c1d1b85d13023fd0ba3498a22ad31bf8ae" Dec 05 20:00:03 crc kubenswrapper[4828]: I1205 20:00:03.662656 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416080-w89gx" Dec 05 20:00:04 crc kubenswrapper[4828]: I1205 20:00:04.212616 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj"] Dec 05 20:00:04 crc kubenswrapper[4828]: I1205 20:00:04.224197 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416035-wnqmj"] Dec 05 20:00:04 crc kubenswrapper[4828]: I1205 20:00:04.461317 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5542062-eec5-413d-a23a-8ac6b1338b0c" path="/var/lib/kubelet/pods/c5542062-eec5-413d-a23a-8ac6b1338b0c/volumes" Dec 05 20:00:05 crc kubenswrapper[4828]: I1205 20:00:05.117777 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 20:00:05 crc kubenswrapper[4828]: I1205 20:00:05.118873 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:00:05 crc kubenswrapper[4828]: E1205 20:00:05.119148 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:00:05 crc kubenswrapper[4828]: I1205 20:00:05.259641 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 20:00:05 crc kubenswrapper[4828]: I1205 20:00:05.259730 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 20:00:17 crc kubenswrapper[4828]: I1205 20:00:17.447763 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:00:17 crc kubenswrapper[4828]: E1205 20:00:17.449088 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:00:32 crc kubenswrapper[4828]: I1205 20:00:32.458870 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:00:32 crc kubenswrapper[4828]: E1205 20:00:32.461302 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:00:34 crc kubenswrapper[4828]: I1205 20:00:34.752448 4828 scope.go:117] "RemoveContainer" containerID="6134daa365957c5e50da988d77f84d2f4b6f05839521653242e508d3d5882c48" Dec 05 20:00:35 crc kubenswrapper[4828]: I1205 20:00:35.259958 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 20:00:35 crc kubenswrapper[4828]: I1205 20:00:35.260376 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 20:00:44 crc kubenswrapper[4828]: I1205 20:00:44.447286 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:00:44 crc kubenswrapper[4828]: E1205 20:00:44.447998 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:00:59 crc kubenswrapper[4828]: I1205 20:00:59.446383 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:00:59 crc kubenswrapper[4828]: E1205 20:00:59.448115 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.173445 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29416081-mbbpn"] Dec 05 20:01:00 crc kubenswrapper[4828]: E1205 20:01:00.174469 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f21d9c1-122c-4cf5-ad0c-a73393353ea0" containerName="collect-profiles" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.174505 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f21d9c1-122c-4cf5-ad0c-a73393353ea0" containerName="collect-profiles" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.174921 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f21d9c1-122c-4cf5-ad0c-a73393353ea0" containerName="collect-profiles" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.176026 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.203175 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416081-mbbpn"] Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.313651 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-fernet-keys\") pod \"keystone-cron-29416081-mbbpn\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.313849 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-combined-ca-bundle\") pod \"keystone-cron-29416081-mbbpn\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.313926 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d42sd\" (UniqueName: \"kubernetes.io/projected/d30b1521-4341-40f3-8952-8e0d03fc192b-kube-api-access-d42sd\") pod \"keystone-cron-29416081-mbbpn\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.313954 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-config-data\") pod \"keystone-cron-29416081-mbbpn\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.415403 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-config-data\") pod \"keystone-cron-29416081-mbbpn\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.415488 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-fernet-keys\") pod \"keystone-cron-29416081-mbbpn\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.415604 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-combined-ca-bundle\") pod \"keystone-cron-29416081-mbbpn\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.415699 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d42sd\" (UniqueName: \"kubernetes.io/projected/d30b1521-4341-40f3-8952-8e0d03fc192b-kube-api-access-d42sd\") pod \"keystone-cron-29416081-mbbpn\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.422219 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-config-data\") pod \"keystone-cron-29416081-mbbpn\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.423225 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-combined-ca-bundle\") pod \"keystone-cron-29416081-mbbpn\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.423524 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-fernet-keys\") pod \"keystone-cron-29416081-mbbpn\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.432976 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d42sd\" (UniqueName: \"kubernetes.io/projected/d30b1521-4341-40f3-8952-8e0d03fc192b-kube-api-access-d42sd\") pod \"keystone-cron-29416081-mbbpn\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.541028 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:00 crc kubenswrapper[4828]: I1205 20:01:00.979407 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416081-mbbpn"] Dec 05 20:01:01 crc kubenswrapper[4828]: I1205 20:01:01.297110 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416081-mbbpn" event={"ID":"d30b1521-4341-40f3-8952-8e0d03fc192b","Type":"ContainerStarted","Data":"26439e12cb45a0d5f0db98e244e58f469037cd1e6b90c0064b0ec2dc9f6f5662"} Dec 05 20:01:01 crc kubenswrapper[4828]: I1205 20:01:01.297147 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416081-mbbpn" event={"ID":"d30b1521-4341-40f3-8952-8e0d03fc192b","Type":"ContainerStarted","Data":"66854e4ce9b08809f2b4b52a3bbae9731bfe9f0275f0842ea10cd2e504d168d5"} Dec 05 20:01:01 crc kubenswrapper[4828]: I1205 20:01:01.325425 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29416081-mbbpn" podStartSLOduration=1.325402346 podStartE2EDuration="1.325402346s" podCreationTimestamp="2025-12-05 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 20:01:01.319583689 +0000 UTC m=+3439.214806005" watchObservedRunningTime="2025-12-05 20:01:01.325402346 +0000 UTC m=+3439.220624682" Dec 05 20:01:03 crc kubenswrapper[4828]: I1205 20:01:03.313277 4828 generic.go:334] "Generic (PLEG): container finished" podID="d30b1521-4341-40f3-8952-8e0d03fc192b" containerID="26439e12cb45a0d5f0db98e244e58f469037cd1e6b90c0064b0ec2dc9f6f5662" exitCode=0 Dec 05 20:01:03 crc kubenswrapper[4828]: I1205 20:01:03.313317 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416081-mbbpn" event={"ID":"d30b1521-4341-40f3-8952-8e0d03fc192b","Type":"ContainerDied","Data":"26439e12cb45a0d5f0db98e244e58f469037cd1e6b90c0064b0ec2dc9f6f5662"} Dec 05 20:01:04 crc kubenswrapper[4828]: 
I1205 20:01:04.698963 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:04 crc kubenswrapper[4828]: I1205 20:01:04.810784 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-combined-ca-bundle\") pod \"d30b1521-4341-40f3-8952-8e0d03fc192b\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " Dec 05 20:01:04 crc kubenswrapper[4828]: I1205 20:01:04.810861 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-fernet-keys\") pod \"d30b1521-4341-40f3-8952-8e0d03fc192b\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " Dec 05 20:01:04 crc kubenswrapper[4828]: I1205 20:01:04.811055 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d42sd\" (UniqueName: \"kubernetes.io/projected/d30b1521-4341-40f3-8952-8e0d03fc192b-kube-api-access-d42sd\") pod \"d30b1521-4341-40f3-8952-8e0d03fc192b\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " Dec 05 20:01:04 crc kubenswrapper[4828]: I1205 20:01:04.811153 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-config-data\") pod \"d30b1521-4341-40f3-8952-8e0d03fc192b\" (UID: \"d30b1521-4341-40f3-8952-8e0d03fc192b\") " Dec 05 20:01:04 crc kubenswrapper[4828]: I1205 20:01:04.816936 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d30b1521-4341-40f3-8952-8e0d03fc192b" (UID: "d30b1521-4341-40f3-8952-8e0d03fc192b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 20:01:04 crc kubenswrapper[4828]: I1205 20:01:04.824126 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30b1521-4341-40f3-8952-8e0d03fc192b-kube-api-access-d42sd" (OuterVolumeSpecName: "kube-api-access-d42sd") pod "d30b1521-4341-40f3-8952-8e0d03fc192b" (UID: "d30b1521-4341-40f3-8952-8e0d03fc192b"). InnerVolumeSpecName "kube-api-access-d42sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:01:04 crc kubenswrapper[4828]: I1205 20:01:04.842449 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d30b1521-4341-40f3-8952-8e0d03fc192b" (UID: "d30b1521-4341-40f3-8952-8e0d03fc192b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 20:01:04 crc kubenswrapper[4828]: I1205 20:01:04.864066 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-config-data" (OuterVolumeSpecName: "config-data") pod "d30b1521-4341-40f3-8952-8e0d03fc192b" (UID: "d30b1521-4341-40f3-8952-8e0d03fc192b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 20:01:04 crc kubenswrapper[4828]: I1205 20:01:04.913179 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d42sd\" (UniqueName: \"kubernetes.io/projected/d30b1521-4341-40f3-8952-8e0d03fc192b-kube-api-access-d42sd\") on node \"crc\" DevicePath \"\"" Dec 05 20:01:04 crc kubenswrapper[4828]: I1205 20:01:04.913213 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 20:01:04 crc kubenswrapper[4828]: I1205 20:01:04.913242 4828 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 20:01:04 crc kubenswrapper[4828]: I1205 20:01:04.913253 4828 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d30b1521-4341-40f3-8952-8e0d03fc192b-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 20:01:05 crc kubenswrapper[4828]: I1205 20:01:05.259624 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 20:01:05 crc kubenswrapper[4828]: I1205 20:01:05.260019 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 20:01:05 crc kubenswrapper[4828]: I1205 20:01:05.260083 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 20:01:05 crc kubenswrapper[4828]: I1205 20:01:05.261004 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15759220b60d7ed708ce1d46839af98b1a9ed090b4ca259095bfd29b16663c22"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 20:01:05 crc kubenswrapper[4828]: I1205 20:01:05.261084 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://15759220b60d7ed708ce1d46839af98b1a9ed090b4ca259095bfd29b16663c22" gracePeriod=600 Dec 05 20:01:05 crc kubenswrapper[4828]: I1205 20:01:05.356894 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416081-mbbpn" event={"ID":"d30b1521-4341-40f3-8952-8e0d03fc192b","Type":"ContainerDied","Data":"66854e4ce9b08809f2b4b52a3bbae9731bfe9f0275f0842ea10cd2e504d168d5"} Dec 05 20:01:05 crc kubenswrapper[4828]: I1205 20:01:05.356941 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66854e4ce9b08809f2b4b52a3bbae9731bfe9f0275f0842ea10cd2e504d168d5" Dec 05 20:01:05 crc kubenswrapper[4828]: I1205 20:01:05.356962 4828 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-cron-29416081-mbbpn" Dec 05 20:01:06 crc kubenswrapper[4828]: I1205 20:01:06.366726 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="15759220b60d7ed708ce1d46839af98b1a9ed090b4ca259095bfd29b16663c22" exitCode=0 Dec 05 20:01:06 crc kubenswrapper[4828]: I1205 20:01:06.366816 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"15759220b60d7ed708ce1d46839af98b1a9ed090b4ca259095bfd29b16663c22"} Dec 05 20:01:06 crc kubenswrapper[4828]: I1205 20:01:06.367086 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f"} Dec 05 20:01:06 crc kubenswrapper[4828]: I1205 20:01:06.367125 4828 scope.go:117] "RemoveContainer" containerID="c53fe86346244c735dafafc92bb560536e9ebe6d927e91769068e759a4d288d5" Dec 05 20:01:12 crc kubenswrapper[4828]: I1205 20:01:12.458685 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:01:12 crc kubenswrapper[4828]: E1205 20:01:12.459521 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:01:25 crc kubenswrapper[4828]: I1205 20:01:25.446783 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:01:25 crc kubenswrapper[4828]: E1205 20:01:25.447489 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:01:38 crc kubenswrapper[4828]: I1205 20:01:38.447129 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:01:38 crc kubenswrapper[4828]: E1205 20:01:38.448000 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:01:52 crc kubenswrapper[4828]: I1205 20:01:52.462662 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:01:52 crc kubenswrapper[4828]: E1205 20:01:52.463645 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.524994 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9kh2s"] Dec 05 20:01:57 crc kubenswrapper[4828]: E1205 20:01:57.525710 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30b1521-4341-40f3-8952-8e0d03fc192b" containerName="keystone-cron" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.525723 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30b1521-4341-40f3-8952-8e0d03fc192b" containerName="keystone-cron" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.525964 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="d30b1521-4341-40f3-8952-8e0d03fc192b" containerName="keystone-cron" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.528033 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.539873 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9kh2s"] Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.678610 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6jzm\" (UniqueName: \"kubernetes.io/projected/0511136b-8d75-446b-a636-c88924c77822-kube-api-access-p6jzm\") pod \"certified-operators-9kh2s\" (UID: \"0511136b-8d75-446b-a636-c88924c77822\") " pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.678666 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0511136b-8d75-446b-a636-c88924c77822-catalog-content\") pod \"certified-operators-9kh2s\" (UID: \"0511136b-8d75-446b-a636-c88924c77822\") " pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.679134 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0511136b-8d75-446b-a636-c88924c77822-utilities\") pod \"certified-operators-9kh2s\" (UID: \"0511136b-8d75-446b-a636-c88924c77822\") " pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.781146 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0511136b-8d75-446b-a636-c88924c77822-utilities\") pod \"certified-operators-9kh2s\" (UID: \"0511136b-8d75-446b-a636-c88924c77822\") " pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.781518 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6jzm\" (UniqueName: \"kubernetes.io/projected/0511136b-8d75-446b-a636-c88924c77822-kube-api-access-p6jzm\") pod \"certified-operators-9kh2s\" (UID: \"0511136b-8d75-446b-a636-c88924c77822\") " pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.781538 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0511136b-8d75-446b-a636-c88924c77822-catalog-content\") pod \"certified-operators-9kh2s\" (UID: \"0511136b-8d75-446b-a636-c88924c77822\") " pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.781677 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0511136b-8d75-446b-a636-c88924c77822-utilities\") pod \"certified-operators-9kh2s\" (UID: \"0511136b-8d75-446b-a636-c88924c77822\") " pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.782010 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0511136b-8d75-446b-a636-c88924c77822-catalog-content\") pod \"certified-operators-9kh2s\" (UID: \"0511136b-8d75-446b-a636-c88924c77822\") " pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.804172 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6jzm\" (UniqueName: \"kubernetes.io/projected/0511136b-8d75-446b-a636-c88924c77822-kube-api-access-p6jzm\") pod \"certified-operators-9kh2s\" (UID: \"0511136b-8d75-446b-a636-c88924c77822\") " pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:01:57 crc kubenswrapper[4828]: I1205 20:01:57.857879 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:01:58 crc kubenswrapper[4828]: I1205 20:01:58.336316 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9kh2s"] Dec 05 20:01:58 crc kubenswrapper[4828]: I1205 20:01:58.857349 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kh2s" event={"ID":"0511136b-8d75-446b-a636-c88924c77822","Type":"ContainerStarted","Data":"cc0d51aaece86405ecdf3c725a40b3df451984a8c4116d1c1746e781bfcf97c2"} Dec 05 20:01:58 crc kubenswrapper[4828]: I1205 20:01:58.857677 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kh2s" event={"ID":"0511136b-8d75-446b-a636-c88924c77822","Type":"ContainerStarted","Data":"548f163329f594bf9171f0a664ab3b8775c930dd126510979154bc6b1af47017"} Dec 05 20:01:59 crc kubenswrapper[4828]: I1205 20:01:59.879296 4828 generic.go:334] "Generic (PLEG): container finished" podID="0511136b-8d75-446b-a636-c88924c77822" containerID="cc0d51aaece86405ecdf3c725a40b3df451984a8c4116d1c1746e781bfcf97c2" exitCode=0 Dec 05 20:01:59 crc kubenswrapper[4828]: I1205 20:01:59.879388 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kh2s" event={"ID":"0511136b-8d75-446b-a636-c88924c77822","Type":"ContainerDied","Data":"cc0d51aaece86405ecdf3c725a40b3df451984a8c4116d1c1746e781bfcf97c2"} Dec 05 20:01:59 crc kubenswrapper[4828]: I1205 20:01:59.881834 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 20:02:00 crc kubenswrapper[4828]: I1205 20:02:00.891022 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kh2s" event={"ID":"0511136b-8d75-446b-a636-c88924c77822","Type":"ContainerStarted","Data":"455d67f5274324180f76079bd74e7a8bbd3fccdb8ec8a7e93a9767313dcb5d9e"} Dec 
05 20:02:01 crc kubenswrapper[4828]: I1205 20:02:01.902903 4828 generic.go:334] "Generic (PLEG): container finished" podID="0511136b-8d75-446b-a636-c88924c77822" containerID="455d67f5274324180f76079bd74e7a8bbd3fccdb8ec8a7e93a9767313dcb5d9e" exitCode=0 Dec 05 20:02:01 crc kubenswrapper[4828]: I1205 20:02:01.902946 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kh2s" event={"ID":"0511136b-8d75-446b-a636-c88924c77822","Type":"ContainerDied","Data":"455d67f5274324180f76079bd74e7a8bbd3fccdb8ec8a7e93a9767313dcb5d9e"} Dec 05 20:02:03 crc kubenswrapper[4828]: I1205 20:02:03.928481 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kh2s" event={"ID":"0511136b-8d75-446b-a636-c88924c77822","Type":"ContainerStarted","Data":"dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb"} Dec 05 20:02:03 crc kubenswrapper[4828]: I1205 20:02:03.958677 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9kh2s" podStartSLOduration=3.736844271 podStartE2EDuration="6.958661464s" podCreationTimestamp="2025-12-05 20:01:57 +0000 UTC" firstStartedPulling="2025-12-05 20:01:59.881240238 +0000 UTC m=+3497.776462544" lastFinishedPulling="2025-12-05 20:02:03.103057421 +0000 UTC m=+3500.998279737" observedRunningTime="2025-12-05 20:02:03.957002809 +0000 UTC m=+3501.852225145" watchObservedRunningTime="2025-12-05 20:02:03.958661464 +0000 UTC m=+3501.853883770" Dec 05 20:02:07 crc kubenswrapper[4828]: I1205 20:02:07.447438 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:02:07 crc kubenswrapper[4828]: E1205 20:02:07.448730 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:02:07 crc kubenswrapper[4828]: I1205 20:02:07.858269 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:02:07 crc kubenswrapper[4828]: I1205 20:02:07.858347 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:02:07 crc kubenswrapper[4828]: I1205 20:02:07.903573 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.404801 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7c62q"] Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.408059 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.418347 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7c62q"] Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.551483 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rshf\" (UniqueName: \"kubernetes.io/projected/3eb764b0-a39d-4cf6-9600-4eadc5da383d-kube-api-access-9rshf\") pod \"community-operators-7c62q\" (UID: \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\") " pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.551557 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb764b0-a39d-4cf6-9600-4eadc5da383d-utilities\") pod \"community-operators-7c62q\" (UID: \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\") " pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.551580 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb764b0-a39d-4cf6-9600-4eadc5da383d-catalog-content\") pod \"community-operators-7c62q\" (UID: \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\") " pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.653656 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rshf\" (UniqueName: \"kubernetes.io/projected/3eb764b0-a39d-4cf6-9600-4eadc5da383d-kube-api-access-9rshf\") pod \"community-operators-7c62q\" (UID: \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\") " pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.653747 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb764b0-a39d-4cf6-9600-4eadc5da383d-utilities\") pod \"community-operators-7c62q\" (UID: \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\") " pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.653774 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb764b0-a39d-4cf6-9600-4eadc5da383d-catalog-content\") pod \"community-operators-7c62q\" (UID: \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\") " pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.654417 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb764b0-a39d-4cf6-9600-4eadc5da383d-utilities\") pod \"community-operators-7c62q\" (UID: \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\") " pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.654483 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb764b0-a39d-4cf6-9600-4eadc5da383d-catalog-content\") pod \"community-operators-7c62q\" (UID: \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\") " pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.676167 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9rshf\" (UniqueName: \"kubernetes.io/projected/3eb764b0-a39d-4cf6-9600-4eadc5da383d-kube-api-access-9rshf\") pod \"community-operators-7c62q\" (UID: \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\") " pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.733449 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:17 crc kubenswrapper[4828]: I1205 20:02:17.916190 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:02:18 crc kubenswrapper[4828]: I1205 20:02:18.309137 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7c62q"] Dec 05 20:02:18 crc kubenswrapper[4828]: W1205 20:02:18.318031 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eb764b0_a39d_4cf6_9600_4eadc5da383d.slice/crio-406bb293ab28e823c6066c2bf950aa39525d99a8a744d7fe748bf3d2185511c9 WatchSource:0}: Error finding container 406bb293ab28e823c6066c2bf950aa39525d99a8a744d7fe748bf3d2185511c9: Status 404 returned error can't find the container with id 406bb293ab28e823c6066c2bf950aa39525d99a8a744d7fe748bf3d2185511c9 Dec 05 20:02:19 crc kubenswrapper[4828]: I1205 20:02:19.073278 4828 generic.go:334] "Generic (PLEG): container finished" podID="3eb764b0-a39d-4cf6-9600-4eadc5da383d" containerID="d251e6b2c784d25caf6570e8444f912603011b0363c929ff83bb1d3d74847ee4" exitCode=0 Dec 05 20:02:19 crc kubenswrapper[4828]: I1205 20:02:19.073382 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c62q" event={"ID":"3eb764b0-a39d-4cf6-9600-4eadc5da383d","Type":"ContainerDied","Data":"d251e6b2c784d25caf6570e8444f912603011b0363c929ff83bb1d3d74847ee4"} Dec 05 20:02:19 crc kubenswrapper[4828]: I1205 20:02:19.073598 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c62q" event={"ID":"3eb764b0-a39d-4cf6-9600-4eadc5da383d","Type":"ContainerStarted","Data":"406bb293ab28e823c6066c2bf950aa39525d99a8a744d7fe748bf3d2185511c9"} Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.083522 4828 generic.go:334] "Generic (PLEG): container finished" podID="3eb764b0-a39d-4cf6-9600-4eadc5da383d" containerID="708842c699d5c61957a8989263433da42a1e079f5e2d26415377ae1c152017ed" exitCode=0 Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.083628 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c62q" event={"ID":"3eb764b0-a39d-4cf6-9600-4eadc5da383d","Type":"ContainerDied","Data":"708842c699d5c61957a8989263433da42a1e079f5e2d26415377ae1c152017ed"} Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.204520 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9kh2s"] Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.204914 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9kh2s" podUID="0511136b-8d75-446b-a636-c88924c77822" containerName="registry-server" containerID="cri-o://dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb" gracePeriod=2 Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.684394 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.810301 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0511136b-8d75-446b-a636-c88924c77822-utilities\") pod \"0511136b-8d75-446b-a636-c88924c77822\" (UID: \"0511136b-8d75-446b-a636-c88924c77822\") " Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.810556 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0511136b-8d75-446b-a636-c88924c77822-catalog-content\") pod \"0511136b-8d75-446b-a636-c88924c77822\" (UID: \"0511136b-8d75-446b-a636-c88924c77822\") " Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.810632 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6jzm\" (UniqueName: \"kubernetes.io/projected/0511136b-8d75-446b-a636-c88924c77822-kube-api-access-p6jzm\") pod \"0511136b-8d75-446b-a636-c88924c77822\" (UID: \"0511136b-8d75-446b-a636-c88924c77822\") " Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.812951 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0511136b-8d75-446b-a636-c88924c77822-utilities" (OuterVolumeSpecName: "utilities") pod "0511136b-8d75-446b-a636-c88924c77822" (UID: "0511136b-8d75-446b-a636-c88924c77822"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.817166 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0511136b-8d75-446b-a636-c88924c77822-kube-api-access-p6jzm" (OuterVolumeSpecName: "kube-api-access-p6jzm") pod "0511136b-8d75-446b-a636-c88924c77822" (UID: "0511136b-8d75-446b-a636-c88924c77822"). InnerVolumeSpecName "kube-api-access-p6jzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.859680 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0511136b-8d75-446b-a636-c88924c77822-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0511136b-8d75-446b-a636-c88924c77822" (UID: "0511136b-8d75-446b-a636-c88924c77822"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.913227 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6jzm\" (UniqueName: \"kubernetes.io/projected/0511136b-8d75-446b-a636-c88924c77822-kube-api-access-p6jzm\") on node \"crc\" DevicePath \"\"" Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.913282 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0511136b-8d75-446b-a636-c88924c77822-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 20:02:20 crc kubenswrapper[4828]: I1205 20:02:20.913291 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0511136b-8d75-446b-a636-c88924c77822-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.099805 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c62q" event={"ID":"3eb764b0-a39d-4cf6-9600-4eadc5da383d","Type":"ContainerStarted","Data":"d2e809e152b2196e15f7b7fc12df9bf53e48b473b1d8614bf4335d07ed6a0846"} Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.103653 4828 generic.go:334] "Generic (PLEG): container finished" podID="0511136b-8d75-446b-a636-c88924c77822" containerID="dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb" exitCode=0 Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.103687 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9kh2s" Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.103709 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kh2s" event={"ID":"0511136b-8d75-446b-a636-c88924c77822","Type":"ContainerDied","Data":"dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb"} Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.103783 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kh2s" event={"ID":"0511136b-8d75-446b-a636-c88924c77822","Type":"ContainerDied","Data":"548f163329f594bf9171f0a664ab3b8775c930dd126510979154bc6b1af47017"} Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.103807 4828 scope.go:117] "RemoveContainer" containerID="dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb" Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.127193 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7c62q" podStartSLOduration=2.719759329 podStartE2EDuration="4.127174198s" podCreationTimestamp="2025-12-05 20:02:17 +0000 UTC" firstStartedPulling="2025-12-05 20:02:19.074959661 +0000 UTC m=+3516.970181967" lastFinishedPulling="2025-12-05 20:02:20.48237453 +0000 UTC m=+3518.377596836" observedRunningTime="2025-12-05 20:02:21.124491095 +0000 UTC m=+3519.019713411" watchObservedRunningTime="2025-12-05 20:02:21.127174198 +0000 UTC m=+3519.022396494" Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.143161 4828 scope.go:117] "RemoveContainer" containerID="455d67f5274324180f76079bd74e7a8bbd3fccdb8ec8a7e93a9767313dcb5d9e" Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.153689 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9kh2s"] Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.164320 4828 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/certified-operators-9kh2s"] Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.181812 4828 scope.go:117] "RemoveContainer" containerID="cc0d51aaece86405ecdf3c725a40b3df451984a8c4116d1c1746e781bfcf97c2" Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.230576 4828 scope.go:117] "RemoveContainer" containerID="dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb" Dec 05 20:02:21 crc kubenswrapper[4828]: E1205 20:02:21.231213 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb\": container with ID starting with dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb not found: ID does not exist" containerID="dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb" Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.231263 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb"} err="failed to get container status \"dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb\": rpc error: code = NotFound desc = could not find container \"dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb\": container with ID starting with dd5ad025aa83f8dbbdec6649ab9047aaabdc0a08120c8493651b40f26444babb not found: ID does not exist" Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.231309 4828 scope.go:117] "RemoveContainer" containerID="455d67f5274324180f76079bd74e7a8bbd3fccdb8ec8a7e93a9767313dcb5d9e" Dec 05 20:02:21 crc kubenswrapper[4828]: E1205 20:02:21.231760 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"455d67f5274324180f76079bd74e7a8bbd3fccdb8ec8a7e93a9767313dcb5d9e\": container with ID starting with 455d67f5274324180f76079bd74e7a8bbd3fccdb8ec8a7e93a9767313dcb5d9e not found: ID does not exist" containerID="455d67f5274324180f76079bd74e7a8bbd3fccdb8ec8a7e93a9767313dcb5d9e" Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.231778 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"455d67f5274324180f76079bd74e7a8bbd3fccdb8ec8a7e93a9767313dcb5d9e"} err="failed to get container status \"455d67f5274324180f76079bd74e7a8bbd3fccdb8ec8a7e93a9767313dcb5d9e\": rpc error: code = NotFound desc = could not find container \"455d67f5274324180f76079bd74e7a8bbd3fccdb8ec8a7e93a9767313dcb5d9e\": container with ID starting with 455d67f5274324180f76079bd74e7a8bbd3fccdb8ec8a7e93a9767313dcb5d9e not found: ID does not exist" Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.231791 4828 scope.go:117] "RemoveContainer" containerID="cc0d51aaece86405ecdf3c725a40b3df451984a8c4116d1c1746e781bfcf97c2" Dec 05 20:02:21 crc kubenswrapper[4828]: E1205 20:02:21.232835 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc0d51aaece86405ecdf3c725a40b3df451984a8c4116d1c1746e781bfcf97c2\": container with ID starting with cc0d51aaece86405ecdf3c725a40b3df451984a8c4116d1c1746e781bfcf97c2 not found: ID does not exist" containerID="cc0d51aaece86405ecdf3c725a40b3df451984a8c4116d1c1746e781bfcf97c2" Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.232874 4828 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cc0d51aaece86405ecdf3c725a40b3df451984a8c4116d1c1746e781bfcf97c2"} err="failed to get container status \"cc0d51aaece86405ecdf3c725a40b3df451984a8c4116d1c1746e781bfcf97c2\": rpc error: code = NotFound desc = could not find container \"cc0d51aaece86405ecdf3c725a40b3df451984a8c4116d1c1746e781bfcf97c2\": container with ID starting with cc0d51aaece86405ecdf3c725a40b3df451984a8c4116d1c1746e781bfcf97c2 not found: ID does not exist" Dec 05 20:02:21 crc kubenswrapper[4828]: I1205 20:02:21.467460 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:02:21 crc kubenswrapper[4828]: E1205 20:02:21.467840 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:02:22 crc kubenswrapper[4828]: I1205 20:02:22.460069 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0511136b-8d75-446b-a636-c88924c77822" path="/var/lib/kubelet/pods/0511136b-8d75-446b-a636-c88924c77822/volumes" Dec 05 20:02:27 crc kubenswrapper[4828]: I1205 20:02:27.735180 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:27 crc kubenswrapper[4828]: I1205 20:02:27.735710 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:27 crc kubenswrapper[4828]: I1205 20:02:27.793419 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:28 crc kubenswrapper[4828]: I1205 20:02:28.216094 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:28 crc kubenswrapper[4828]: I1205 20:02:28.725625 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7c62q"] Dec 05 20:02:30 crc kubenswrapper[4828]: I1205 20:02:30.189763 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7c62q" podUID="3eb764b0-a39d-4cf6-9600-4eadc5da383d" containerName="registry-server" containerID="cri-o://d2e809e152b2196e15f7b7fc12df9bf53e48b473b1d8614bf4335d07ed6a0846" gracePeriod=2 Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.200394 4828 generic.go:334] "Generic (PLEG): container finished" podID="3eb764b0-a39d-4cf6-9600-4eadc5da383d" containerID="d2e809e152b2196e15f7b7fc12df9bf53e48b473b1d8614bf4335d07ed6a0846" exitCode=0 Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.200499 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c62q" event={"ID":"3eb764b0-a39d-4cf6-9600-4eadc5da383d","Type":"ContainerDied","Data":"d2e809e152b2196e15f7b7fc12df9bf53e48b473b1d8614bf4335d07ed6a0846"} Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.200667 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c62q" 
event={"ID":"3eb764b0-a39d-4cf6-9600-4eadc5da383d","Type":"ContainerDied","Data":"406bb293ab28e823c6066c2bf950aa39525d99a8a744d7fe748bf3d2185511c9"} Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.200683 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="406bb293ab28e823c6066c2bf950aa39525d99a8a744d7fe748bf3d2185511c9" Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.275777 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.396699 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb764b0-a39d-4cf6-9600-4eadc5da383d-utilities\") pod \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\" (UID: \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\") " Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.396784 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb764b0-a39d-4cf6-9600-4eadc5da383d-catalog-content\") pod \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\" (UID: \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\") " Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.397008 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rshf\" (UniqueName: \"kubernetes.io/projected/3eb764b0-a39d-4cf6-9600-4eadc5da383d-kube-api-access-9rshf\") pod \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\" (UID: \"3eb764b0-a39d-4cf6-9600-4eadc5da383d\") " Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.397806 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb764b0-a39d-4cf6-9600-4eadc5da383d-utilities" (OuterVolumeSpecName: "utilities") pod "3eb764b0-a39d-4cf6-9600-4eadc5da383d" (UID: "3eb764b0-a39d-4cf6-9600-4eadc5da383d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.402953 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb764b0-a39d-4cf6-9600-4eadc5da383d-kube-api-access-9rshf" (OuterVolumeSpecName: "kube-api-access-9rshf") pod "3eb764b0-a39d-4cf6-9600-4eadc5da383d" (UID: "3eb764b0-a39d-4cf6-9600-4eadc5da383d"). InnerVolumeSpecName "kube-api-access-9rshf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.499425 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb764b0-a39d-4cf6-9600-4eadc5da383d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3eb764b0-a39d-4cf6-9600-4eadc5da383d" (UID: "3eb764b0-a39d-4cf6-9600-4eadc5da383d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.512242 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rshf\" (UniqueName: \"kubernetes.io/projected/3eb764b0-a39d-4cf6-9600-4eadc5da383d-kube-api-access-9rshf\") on node \"crc\" DevicePath \"\"" Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.512289 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb764b0-a39d-4cf6-9600-4eadc5da383d-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 20:02:31 crc kubenswrapper[4828]: I1205 20:02:31.512302 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb764b0-a39d-4cf6-9600-4eadc5da383d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 20:02:32 crc kubenswrapper[4828]: I1205 20:02:32.213110 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7c62q" Dec 05 20:02:32 crc kubenswrapper[4828]: I1205 20:02:32.264220 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7c62q"] Dec 05 20:02:32 crc kubenswrapper[4828]: I1205 20:02:32.276526 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7c62q"] Dec 05 20:02:32 crc kubenswrapper[4828]: I1205 20:02:32.460316 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb764b0-a39d-4cf6-9600-4eadc5da383d" path="/var/lib/kubelet/pods/3eb764b0-a39d-4cf6-9600-4eadc5da383d/volumes" Dec 05 20:02:34 crc kubenswrapper[4828]: I1205 20:02:34.447123 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:02:34 crc kubenswrapper[4828]: E1205 20:02:34.447748 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:02:45 crc kubenswrapper[4828]: I1205 20:02:45.446725 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:02:45 crc kubenswrapper[4828]: E1205 20:02:45.447564 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:03:00 crc kubenswrapper[4828]: I1205 20:03:00.446660 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:03:00 crc kubenswrapper[4828]: E1205 20:03:00.447388 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" 
pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:03:05 crc kubenswrapper[4828]: I1205 20:03:05.259807 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 20:03:05 crc kubenswrapper[4828]: I1205 20:03:05.260502 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 20:03:11 crc kubenswrapper[4828]: I1205 20:03:11.447111 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:03:11 crc kubenswrapper[4828]: E1205 20:03:11.448035 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:03:23 crc kubenswrapper[4828]: I1205 20:03:23.447336 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:03:23 crc kubenswrapper[4828]: E1205 20:03:23.448930 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:03:35 crc kubenswrapper[4828]: I1205 20:03:35.260141 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 20:03:35 crc kubenswrapper[4828]: I1205 20:03:35.260566 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 20:03:36 crc kubenswrapper[4828]: I1205 20:03:36.446351 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:03:36 crc kubenswrapper[4828]: E1205 20:03:36.446653 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" 
pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:03:47 crc kubenswrapper[4828]: I1205 20:03:47.447013 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:03:47 crc kubenswrapper[4828]: E1205 20:03:47.447983 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:03:59 crc kubenswrapper[4828]: I1205 20:03:59.447303 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:03:59 crc kubenswrapper[4828]: E1205 20:03:59.448566 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:04:05 crc kubenswrapper[4828]: I1205 20:04:05.259320 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 20:04:05 crc kubenswrapper[4828]: I1205 20:04:05.259855 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 20:04:05 crc kubenswrapper[4828]: I1205 20:04:05.259900 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 20:04:05 crc kubenswrapper[4828]: I1205 20:04:05.260639 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 20:04:05 crc kubenswrapper[4828]: I1205 20:04:05.260726 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" gracePeriod=600 Dec 05 20:04:05 crc kubenswrapper[4828]: E1205 20:04:05.418891 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:04:06 crc kubenswrapper[4828]: I1205 20:04:06.197163 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" exitCode=0 Dec 05 20:04:06 crc kubenswrapper[4828]: I1205 20:04:06.197207 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f"} Dec 05 20:04:06 crc kubenswrapper[4828]: I1205 20:04:06.197240 4828 scope.go:117] "RemoveContainer" containerID="15759220b60d7ed708ce1d46839af98b1a9ed090b4ca259095bfd29b16663c22" Dec 05 20:04:06 crc kubenswrapper[4828]: I1205 20:04:06.197976 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:04:06 crc kubenswrapper[4828]: E1205 20:04:06.198347 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:04:12 crc kubenswrapper[4828]: I1205 20:04:12.457594 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:04:12 crc kubenswrapper[4828]: E1205 20:04:12.458573 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:04:19 crc kubenswrapper[4828]: I1205 20:04:19.446346 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:04:19 crc kubenswrapper[4828]: E1205 20:04:19.447045 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:04:26 crc kubenswrapper[4828]: I1205 20:04:26.446488 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:04:26 crc kubenswrapper[4828]: E1205 20:04:26.447191 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" 
pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:04:31 crc kubenswrapper[4828]: I1205 20:04:31.447435 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:04:31 crc kubenswrapper[4828]: E1205 20:04:31.448359 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:04:38 crc kubenswrapper[4828]: I1205 20:04:38.446718 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:04:38 crc kubenswrapper[4828]: E1205 20:04:38.448976 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:04:45 crc kubenswrapper[4828]: I1205 20:04:45.446623 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:04:45 crc kubenswrapper[4828]: E1205 20:04:45.447380 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:04:52 crc kubenswrapper[4828]: I1205 20:04:52.452011 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:04:52 crc kubenswrapper[4828]: I1205 20:04:52.681488 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434"} Dec 05 20:04:52 crc kubenswrapper[4828]: I1205 20:04:52.682129 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 20:04:59 crc kubenswrapper[4828]: I1205 20:04:59.447099 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:04:59 crc kubenswrapper[4828]: E1205 20:04:59.448240 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:05:05 
crc kubenswrapper[4828]: I1205 20:05:05.125890 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 20:05:06 crc kubenswrapper[4828]: I1205 20:05:06.800793 4828 generic.go:334] "Generic (PLEG): container finished" podID="9d71b946-ed36-403c-9faf-feb03f741474" containerID="0b263ad96f0dffce32f22dd587fdff65d725d6e8242d39dd02f9eab7f01fd73f" exitCode=0 Dec 05 20:05:06 crc kubenswrapper[4828]: I1205 20:05:06.800925 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9d71b946-ed36-403c-9faf-feb03f741474","Type":"ContainerDied","Data":"0b263ad96f0dffce32f22dd587fdff65d725d6e8242d39dd02f9eab7f01fd73f"} Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.191973 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.312786 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-ca-certs\") pod \"9d71b946-ed36-403c-9faf-feb03f741474\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.312839 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-ssh-key\") pod \"9d71b946-ed36-403c-9faf-feb03f741474\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.312913 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-openstack-config-secret\") pod \"9d71b946-ed36-403c-9faf-feb03f741474\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.312957 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spl4f\" (UniqueName: \"kubernetes.io/projected/9d71b946-ed36-403c-9faf-feb03f741474-kube-api-access-spl4f\") pod \"9d71b946-ed36-403c-9faf-feb03f741474\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.312978 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d71b946-ed36-403c-9faf-feb03f741474-openstack-config\") pod \"9d71b946-ed36-403c-9faf-feb03f741474\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.313019 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9d71b946-ed36-403c-9faf-feb03f741474-test-operator-ephemeral-temporary\") pod \"9d71b946-ed36-403c-9faf-feb03f741474\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.313656 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9d71b946-ed36-403c-9faf-feb03f741474-test-operator-ephemeral-workdir\") pod \"9d71b946-ed36-403c-9faf-feb03f741474\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 
20:05:08.313696 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d71b946-ed36-403c-9faf-feb03f741474-config-data\") pod \"9d71b946-ed36-403c-9faf-feb03f741474\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.313748 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d71b946-ed36-403c-9faf-feb03f741474-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "9d71b946-ed36-403c-9faf-feb03f741474" (UID: "9d71b946-ed36-403c-9faf-feb03f741474"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.313875 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"9d71b946-ed36-403c-9faf-feb03f741474\" (UID: \"9d71b946-ed36-403c-9faf-feb03f741474\") " Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.314430 4828 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9d71b946-ed36-403c-9faf-feb03f741474-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.314449 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d71b946-ed36-403c-9faf-feb03f741474-config-data" (OuterVolumeSpecName: "config-data") pod "9d71b946-ed36-403c-9faf-feb03f741474" (UID: "9d71b946-ed36-403c-9faf-feb03f741474"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.324128 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d71b946-ed36-403c-9faf-feb03f741474-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9d71b946-ed36-403c-9faf-feb03f741474" (UID: "9d71b946-ed36-403c-9faf-feb03f741474"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.325142 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9d71b946-ed36-403c-9faf-feb03f741474" (UID: "9d71b946-ed36-403c-9faf-feb03f741474"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.325751 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d71b946-ed36-403c-9faf-feb03f741474-kube-api-access-spl4f" (OuterVolumeSpecName: "kube-api-access-spl4f") pod "9d71b946-ed36-403c-9faf-feb03f741474" (UID: "9d71b946-ed36-403c-9faf-feb03f741474"). InnerVolumeSpecName "kube-api-access-spl4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.342286 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9d71b946-ed36-403c-9faf-feb03f741474" (UID: "9d71b946-ed36-403c-9faf-feb03f741474"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.359443 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9d71b946-ed36-403c-9faf-feb03f741474" (UID: "9d71b946-ed36-403c-9faf-feb03f741474"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.363703 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d71b946-ed36-403c-9faf-feb03f741474-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9d71b946-ed36-403c-9faf-feb03f741474" (UID: "9d71b946-ed36-403c-9faf-feb03f741474"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.364623 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9d71b946-ed36-403c-9faf-feb03f741474" (UID: "9d71b946-ed36-403c-9faf-feb03f741474"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.416041 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.416379 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spl4f\" (UniqueName: \"kubernetes.io/projected/9d71b946-ed36-403c-9faf-feb03f741474-kube-api-access-spl4f\") on node \"crc\" DevicePath \"\"" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.416393 4828 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d71b946-ed36-403c-9faf-feb03f741474-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.416406 4828 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9d71b946-ed36-403c-9faf-feb03f741474-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.416420 4828 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d71b946-ed36-403c-9faf-feb03f741474-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.416492 4828 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.416507 4828 reconciler_common.go:293] 
"Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.416519 4828 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d71b946-ed36-403c-9faf-feb03f741474-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.439450 4828 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.518498 4828 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.824790 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9d71b946-ed36-403c-9faf-feb03f741474","Type":"ContainerDied","Data":"8df6794f19db3051c4553a581432707d3696f3c6cae911c876c4daba23bf52cb"} Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.824859 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 05 20:05:08 crc kubenswrapper[4828]: I1205 20:05:08.824867 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8df6794f19db3051c4553a581432707d3696f3c6cae911c876c4daba23bf52cb" Dec 05 20:05:10 crc kubenswrapper[4828]: I1205 20:05:10.446760 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:05:10 crc kubenswrapper[4828]: E1205 20:05:10.447077 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.528540 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 05 20:05:12 crc kubenswrapper[4828]: E1205 20:05:12.529719 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0511136b-8d75-446b-a636-c88924c77822" containerName="registry-server" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.529733 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="0511136b-8d75-446b-a636-c88924c77822" containerName="registry-server" Dec 05 20:05:12 crc kubenswrapper[4828]: E1205 20:05:12.529751 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb764b0-a39d-4cf6-9600-4eadc5da383d" containerName="registry-server" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.529757 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb764b0-a39d-4cf6-9600-4eadc5da383d" containerName="registry-server" Dec 05 20:05:12 crc kubenswrapper[4828]: E1205 20:05:12.529769 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0511136b-8d75-446b-a636-c88924c77822" containerName="extract-utilities" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.529775 4828 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0511136b-8d75-446b-a636-c88924c77822" containerName="extract-utilities" Dec 05 20:05:12 crc kubenswrapper[4828]: E1205 20:05:12.529785 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb764b0-a39d-4cf6-9600-4eadc5da383d" containerName="extract-content" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.529791 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb764b0-a39d-4cf6-9600-4eadc5da383d" containerName="extract-content" Dec 05 20:05:12 crc kubenswrapper[4828]: E1205 20:05:12.529799 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb764b0-a39d-4cf6-9600-4eadc5da383d" containerName="extract-utilities" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.529804 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb764b0-a39d-4cf6-9600-4eadc5da383d" containerName="extract-utilities" Dec 05 20:05:12 crc kubenswrapper[4828]: E1205 20:05:12.529815 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0511136b-8d75-446b-a636-c88924c77822" containerName="extract-content" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.529820 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="0511136b-8d75-446b-a636-c88924c77822" containerName="extract-content" Dec 05 20:05:12 crc kubenswrapper[4828]: E1205 20:05:12.529861 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d71b946-ed36-403c-9faf-feb03f741474" containerName="tempest-tests-tempest-tests-runner" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.529867 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d71b946-ed36-403c-9faf-feb03f741474" containerName="tempest-tests-tempest-tests-runner" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.530039 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb764b0-a39d-4cf6-9600-4eadc5da383d" containerName="registry-server" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.530060 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="0511136b-8d75-446b-a636-c88924c77822" containerName="registry-server" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.530075 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d71b946-ed36-403c-9faf-feb03f741474" containerName="tempest-tests-tempest-tests-runner" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.530724 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.533030 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vlxgx" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.544495 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.701574 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8pck\" (UniqueName: \"kubernetes.io/projected/9fe13abd-7133-4370-a848-17cea54271e1-kube-api-access-s8pck\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9fe13abd-7133-4370-a848-17cea54271e1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.701641 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9fe13abd-7133-4370-a848-17cea54271e1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.803935 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8pck\" (UniqueName: \"kubernetes.io/projected/9fe13abd-7133-4370-a848-17cea54271e1-kube-api-access-s8pck\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9fe13abd-7133-4370-a848-17cea54271e1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.803988 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9fe13abd-7133-4370-a848-17cea54271e1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.804529 4828 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9fe13abd-7133-4370-a848-17cea54271e1\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.831045 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8pck\" (UniqueName: \"kubernetes.io/projected/9fe13abd-7133-4370-a848-17cea54271e1-kube-api-access-s8pck\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9fe13abd-7133-4370-a848-17cea54271e1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 20:05:12 crc kubenswrapper[4828]: I1205 20:05:12.846459 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9fe13abd-7133-4370-a848-17cea54271e1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 20:05:12 crc 
kubenswrapper[4828]: I1205 20:05:12.858662 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 20:05:13 crc kubenswrapper[4828]: I1205 20:05:13.289043 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 05 20:05:13 crc kubenswrapper[4828]: W1205 20:05:13.297782 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fe13abd_7133_4370_a848_17cea54271e1.slice/crio-9d1e8af7372d1f5549f8ffb587893ea00003a95aa31d57ec504b580cb2deef29 WatchSource:0}: Error finding container 9d1e8af7372d1f5549f8ffb587893ea00003a95aa31d57ec504b580cb2deef29: Status 404 returned error can't find the container with id 9d1e8af7372d1f5549f8ffb587893ea00003a95aa31d57ec504b580cb2deef29 Dec 05 20:05:13 crc kubenswrapper[4828]: I1205 20:05:13.884974 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9fe13abd-7133-4370-a848-17cea54271e1","Type":"ContainerStarted","Data":"9d1e8af7372d1f5549f8ffb587893ea00003a95aa31d57ec504b580cb2deef29"} Dec 05 20:05:14 crc kubenswrapper[4828]: I1205 20:05:14.895371 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9fe13abd-7133-4370-a848-17cea54271e1","Type":"ContainerStarted","Data":"290ec36c05f8a7d05289c101d321c75eb053a48c0d022b9db3ec8540c0ebe207"} Dec 05 20:05:14 crc kubenswrapper[4828]: I1205 20:05:14.913721 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.083983554 podStartE2EDuration="2.913701958s" podCreationTimestamp="2025-12-05 20:05:12 +0000 UTC" firstStartedPulling="2025-12-05 20:05:13.300220084 +0000 UTC m=+3691.195442390" lastFinishedPulling="2025-12-05 20:05:14.129938468 +0000 UTC m=+3692.025160794" observedRunningTime="2025-12-05 20:05:14.906581156 +0000 UTC m=+3692.801803462" watchObservedRunningTime="2025-12-05 20:05:14.913701958 +0000 UTC m=+3692.808924254" Dec 05 20:05:24 crc kubenswrapper[4828]: I1205 20:05:24.446907 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:05:24 crc kubenswrapper[4828]: E1205 20:05:24.447789 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:05:37 crc kubenswrapper[4828]: I1205 20:05:37.446591 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:05:37 crc kubenswrapper[4828]: E1205 20:05:37.448502 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" 
podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:05:37 crc kubenswrapper[4828]: I1205 20:05:37.928730 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4mfks/must-gather-tdlzv"] Dec 05 20:05:37 crc kubenswrapper[4828]: I1205 20:05:37.930770 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/must-gather-tdlzv" Dec 05 20:05:37 crc kubenswrapper[4828]: I1205 20:05:37.932764 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4mfks"/"kube-root-ca.crt" Dec 05 20:05:37 crc kubenswrapper[4828]: I1205 20:05:37.932908 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4mfks"/"default-dockercfg-wtr2r" Dec 05 20:05:37 crc kubenswrapper[4828]: I1205 20:05:37.934275 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4mfks"/"openshift-service-ca.crt" Dec 05 20:05:37 crc kubenswrapper[4828]: I1205 20:05:37.939679 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4mfks/must-gather-tdlzv"] Dec 05 20:05:38 crc kubenswrapper[4828]: I1205 20:05:38.030181 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd62c8e0-605f-4e05-89a2-042ba85ba53d-must-gather-output\") pod \"must-gather-tdlzv\" (UID: \"bd62c8e0-605f-4e05-89a2-042ba85ba53d\") " pod="openshift-must-gather-4mfks/must-gather-tdlzv" Dec 05 20:05:38 crc kubenswrapper[4828]: I1205 20:05:38.030430 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhgns\" (UniqueName: \"kubernetes.io/projected/bd62c8e0-605f-4e05-89a2-042ba85ba53d-kube-api-access-nhgns\") pod \"must-gather-tdlzv\" (UID: \"bd62c8e0-605f-4e05-89a2-042ba85ba53d\") " pod="openshift-must-gather-4mfks/must-gather-tdlzv" Dec 05 20:05:38 crc kubenswrapper[4828]: I1205 20:05:38.133188 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd62c8e0-605f-4e05-89a2-042ba85ba53d-must-gather-output\") pod \"must-gather-tdlzv\" (UID: \"bd62c8e0-605f-4e05-89a2-042ba85ba53d\") " pod="openshift-must-gather-4mfks/must-gather-tdlzv" Dec 05 20:05:38 crc kubenswrapper[4828]: I1205 20:05:38.133381 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhgns\" (UniqueName: \"kubernetes.io/projected/bd62c8e0-605f-4e05-89a2-042ba85ba53d-kube-api-access-nhgns\") pod \"must-gather-tdlzv\" (UID: \"bd62c8e0-605f-4e05-89a2-042ba85ba53d\") " pod="openshift-must-gather-4mfks/must-gather-tdlzv" Dec 05 20:05:38 crc kubenswrapper[4828]: I1205 20:05:38.133598 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd62c8e0-605f-4e05-89a2-042ba85ba53d-must-gather-output\") pod \"must-gather-tdlzv\" (UID: \"bd62c8e0-605f-4e05-89a2-042ba85ba53d\") " pod="openshift-must-gather-4mfks/must-gather-tdlzv" Dec 05 20:05:38 crc kubenswrapper[4828]: I1205 20:05:38.151288 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhgns\" (UniqueName: \"kubernetes.io/projected/bd62c8e0-605f-4e05-89a2-042ba85ba53d-kube-api-access-nhgns\") pod \"must-gather-tdlzv\" (UID: \"bd62c8e0-605f-4e05-89a2-042ba85ba53d\") " pod="openshift-must-gather-4mfks/must-gather-tdlzv" Dec 05 20:05:38 crc 
kubenswrapper[4828]: I1205 20:05:38.268036 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/must-gather-tdlzv" Dec 05 20:05:38 crc kubenswrapper[4828]: I1205 20:05:38.768056 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4mfks/must-gather-tdlzv"] Dec 05 20:05:39 crc kubenswrapper[4828]: I1205 20:05:39.152209 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4mfks/must-gather-tdlzv" event={"ID":"bd62c8e0-605f-4e05-89a2-042ba85ba53d","Type":"ContainerStarted","Data":"1a23e46b9a0d1653fe4e01e9d9d59e85d1204a2fb1fb511400c09a3a5f78f42d"} Dec 05 20:05:43 crc kubenswrapper[4828]: I1205 20:05:43.190729 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4mfks/must-gather-tdlzv" event={"ID":"bd62c8e0-605f-4e05-89a2-042ba85ba53d","Type":"ContainerStarted","Data":"8284089aebfb9042cf6aae51a8f81b800e28d2df860d04d0cb5651e3ae200c13"} Dec 05 20:05:43 crc kubenswrapper[4828]: I1205 20:05:43.191273 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4mfks/must-gather-tdlzv" event={"ID":"bd62c8e0-605f-4e05-89a2-042ba85ba53d","Type":"ContainerStarted","Data":"6af36ae01c9a75df0559af7f0d56ac6eb776a1a0e61db8773d285760a4f21c6c"} Dec 05 20:05:43 crc kubenswrapper[4828]: I1205 20:05:43.215771 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4mfks/must-gather-tdlzv" podStartSLOduration=2.582600815 podStartE2EDuration="6.215750642s" podCreationTimestamp="2025-12-05 20:05:37 +0000 UTC" firstStartedPulling="2025-12-05 20:05:38.773645085 +0000 UTC m=+3716.668867431" lastFinishedPulling="2025-12-05 20:05:42.406794952 +0000 UTC m=+3720.302017258" observedRunningTime="2025-12-05 20:05:43.207994043 +0000 UTC m=+3721.103216389" watchObservedRunningTime="2025-12-05 20:05:43.215750642 +0000 UTC m=+3721.110972968" Dec 05 20:05:46 crc kubenswrapper[4828]: I1205 20:05:46.182325 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4mfks/crc-debug-qdnbf"] Dec 05 20:05:46 crc kubenswrapper[4828]: I1205 20:05:46.183857 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4mfks/crc-debug-qdnbf" Dec 05 20:05:46 crc kubenswrapper[4828]: I1205 20:05:46.228964 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a920da8-5748-49ce-aa1b-39fbf85ad2b6-host\") pod \"crc-debug-qdnbf\" (UID: \"7a920da8-5748-49ce-aa1b-39fbf85ad2b6\") " pod="openshift-must-gather-4mfks/crc-debug-qdnbf" Dec 05 20:05:46 crc kubenswrapper[4828]: I1205 20:05:46.229327 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxcl7\" (UniqueName: \"kubernetes.io/projected/7a920da8-5748-49ce-aa1b-39fbf85ad2b6-kube-api-access-kxcl7\") pod \"crc-debug-qdnbf\" (UID: \"7a920da8-5748-49ce-aa1b-39fbf85ad2b6\") " pod="openshift-must-gather-4mfks/crc-debug-qdnbf" Dec 05 20:05:46 crc kubenswrapper[4828]: I1205 20:05:46.330839 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a920da8-5748-49ce-aa1b-39fbf85ad2b6-host\") pod \"crc-debug-qdnbf\" (UID: \"7a920da8-5748-49ce-aa1b-39fbf85ad2b6\") " pod="openshift-must-gather-4mfks/crc-debug-qdnbf" Dec 05 20:05:46 crc kubenswrapper[4828]: I1205 20:05:46.330910 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxcl7\" (UniqueName: \"kubernetes.io/projected/7a920da8-5748-49ce-aa1b-39fbf85ad2b6-kube-api-access-kxcl7\") pod \"crc-debug-qdnbf\" (UID: \"7a920da8-5748-49ce-aa1b-39fbf85ad2b6\") " pod="openshift-must-gather-4mfks/crc-debug-qdnbf" Dec 05 20:05:46 crc kubenswrapper[4828]: I1205 20:05:46.330919 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a920da8-5748-49ce-aa1b-39fbf85ad2b6-host\") pod \"crc-debug-qdnbf\" (UID: \"7a920da8-5748-49ce-aa1b-39fbf85ad2b6\") " pod="openshift-must-gather-4mfks/crc-debug-qdnbf" Dec 05 20:05:46 crc kubenswrapper[4828]: I1205 20:05:46.356400 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxcl7\" (UniqueName: \"kubernetes.io/projected/7a920da8-5748-49ce-aa1b-39fbf85ad2b6-kube-api-access-kxcl7\") pod \"crc-debug-qdnbf\" (UID: \"7a920da8-5748-49ce-aa1b-39fbf85ad2b6\") " pod="openshift-must-gather-4mfks/crc-debug-qdnbf" Dec 05 20:05:46 crc kubenswrapper[4828]: I1205 20:05:46.502695 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4mfks/crc-debug-qdnbf" Dec 05 20:05:47 crc kubenswrapper[4828]: I1205 20:05:47.242485 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4mfks/crc-debug-qdnbf" event={"ID":"7a920da8-5748-49ce-aa1b-39fbf85ad2b6","Type":"ContainerStarted","Data":"b08297b6bf6320a79932d9622eb3ea6db1657dc7f984bdb9514d20f9bcebc6f7"} Dec 05 20:05:49 crc kubenswrapper[4828]: I1205 20:05:49.446320 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:05:49 crc kubenswrapper[4828]: E1205 20:05:49.446840 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:06:00 crc kubenswrapper[4828]: I1205 20:06:00.443254 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4mfks/crc-debug-qdnbf" event={"ID":"7a920da8-5748-49ce-aa1b-39fbf85ad2b6","Type":"ContainerStarted","Data":"0747acf2c1cef61a7b196aa06b8f9a5047367b0aa10b3279125cd25003bcb9e9"} Dec 05 20:06:04 crc kubenswrapper[4828]: I1205 20:06:04.446231 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:06:04 crc kubenswrapper[4828]: E1205 20:06:04.446969 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:06:16 crc kubenswrapper[4828]: I1205 20:06:16.446446 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:06:16 crc kubenswrapper[4828]: E1205 20:06:16.447278 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:06:28 crc kubenswrapper[4828]: I1205 20:06:28.451014 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:06:28 crc kubenswrapper[4828]: E1205 20:06:28.451622 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.258250 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-must-gather-4mfks/crc-debug-qdnbf" podStartSLOduration=39.908400876 podStartE2EDuration="53.258224357s" podCreationTimestamp="2025-12-05 20:05:46 +0000 UTC" firstStartedPulling="2025-12-05 20:05:46.540136781 +0000 UTC m=+3724.435359087" lastFinishedPulling="2025-12-05 20:05:59.889960262 +0000 UTC m=+3737.785182568" observedRunningTime="2025-12-05 20:06:00.475955935 +0000 UTC m=+3738.371178261" watchObservedRunningTime="2025-12-05 20:06:39.258224357 +0000 UTC m=+3777.153446683" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.262772 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dzmqf"] Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.266008 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.274005 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzmqf"] Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.420014 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859af17e-b617-46f1-96a9-e819f88c632f-catalog-content\") pod \"redhat-operators-dzmqf\" (UID: \"859af17e-b617-46f1-96a9-e819f88c632f\") " pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.420862 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmk98\" (UniqueName: \"kubernetes.io/projected/859af17e-b617-46f1-96a9-e819f88c632f-kube-api-access-rmk98\") pod \"redhat-operators-dzmqf\" (UID: \"859af17e-b617-46f1-96a9-e819f88c632f\") " pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.420925 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859af17e-b617-46f1-96a9-e819f88c632f-utilities\") pod \"redhat-operators-dzmqf\" (UID: \"859af17e-b617-46f1-96a9-e819f88c632f\") " pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.446201 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:06:39 crc kubenswrapper[4828]: E1205 20:06:39.446744 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.522108 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859af17e-b617-46f1-96a9-e819f88c632f-catalog-content\") pod \"redhat-operators-dzmqf\" (UID: \"859af17e-b617-46f1-96a9-e819f88c632f\") " pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.522224 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmk98\" (UniqueName: 
\"kubernetes.io/projected/859af17e-b617-46f1-96a9-e819f88c632f-kube-api-access-rmk98\") pod \"redhat-operators-dzmqf\" (UID: \"859af17e-b617-46f1-96a9-e819f88c632f\") " pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.522273 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859af17e-b617-46f1-96a9-e819f88c632f-utilities\") pod \"redhat-operators-dzmqf\" (UID: \"859af17e-b617-46f1-96a9-e819f88c632f\") " pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.522588 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859af17e-b617-46f1-96a9-e819f88c632f-catalog-content\") pod \"redhat-operators-dzmqf\" (UID: \"859af17e-b617-46f1-96a9-e819f88c632f\") " pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.522668 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859af17e-b617-46f1-96a9-e819f88c632f-utilities\") pod \"redhat-operators-dzmqf\" (UID: \"859af17e-b617-46f1-96a9-e819f88c632f\") " pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.541714 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmk98\" (UniqueName: \"kubernetes.io/projected/859af17e-b617-46f1-96a9-e819f88c632f-kube-api-access-rmk98\") pod \"redhat-operators-dzmqf\" (UID: \"859af17e-b617-46f1-96a9-e819f88c632f\") " pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:39 crc kubenswrapper[4828]: I1205 20:06:39.593725 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:40 crc kubenswrapper[4828]: I1205 20:06:40.144381 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzmqf"] Dec 05 20:06:40 crc kubenswrapper[4828]: I1205 20:06:40.996492 4828 generic.go:334] "Generic (PLEG): container finished" podID="859af17e-b617-46f1-96a9-e819f88c632f" containerID="4c02cd7417522766eb4d42b652a19bc0adeccc1ef3169e9fec548b45fcae42ee" exitCode=0 Dec 05 20:06:40 crc kubenswrapper[4828]: I1205 20:06:40.996753 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzmqf" event={"ID":"859af17e-b617-46f1-96a9-e819f88c632f","Type":"ContainerDied","Data":"4c02cd7417522766eb4d42b652a19bc0adeccc1ef3169e9fec548b45fcae42ee"} Dec 05 20:06:40 crc kubenswrapper[4828]: I1205 20:06:40.996812 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzmqf" event={"ID":"859af17e-b617-46f1-96a9-e819f88c632f","Type":"ContainerStarted","Data":"29f78f3938b11965008bc42dfdb9ea1d5a6539ec44b23aee8aa3ef4fc72a613b"} Dec 05 20:06:42 crc kubenswrapper[4828]: I1205 20:06:42.010570 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzmqf" event={"ID":"859af17e-b617-46f1-96a9-e819f88c632f","Type":"ContainerStarted","Data":"c8890aba8f6a68149d1e0fde082797329a8e4efc357158202aff327a0f8cce9f"} Dec 05 20:06:43 crc kubenswrapper[4828]: I1205 20:06:43.026119 4828 generic.go:334] "Generic (PLEG): container finished" podID="859af17e-b617-46f1-96a9-e819f88c632f" containerID="c8890aba8f6a68149d1e0fde082797329a8e4efc357158202aff327a0f8cce9f" exitCode=0 Dec 05 20:06:43 crc kubenswrapper[4828]: I1205 20:06:43.026184 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzmqf" event={"ID":"859af17e-b617-46f1-96a9-e819f88c632f","Type":"ContainerDied","Data":"c8890aba8f6a68149d1e0fde082797329a8e4efc357158202aff327a0f8cce9f"} Dec 05 20:06:43 crc kubenswrapper[4828]: I1205 20:06:43.029683 4828 generic.go:334] "Generic (PLEG): container finished" podID="7a920da8-5748-49ce-aa1b-39fbf85ad2b6" containerID="0747acf2c1cef61a7b196aa06b8f9a5047367b0aa10b3279125cd25003bcb9e9" exitCode=0 Dec 05 20:06:43 crc kubenswrapper[4828]: I1205 20:06:43.029716 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4mfks/crc-debug-qdnbf" event={"ID":"7a920da8-5748-49ce-aa1b-39fbf85ad2b6","Type":"ContainerDied","Data":"0747acf2c1cef61a7b196aa06b8f9a5047367b0aa10b3279125cd25003bcb9e9"} Dec 05 20:06:44 crc kubenswrapper[4828]: I1205 20:06:44.045012 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzmqf" event={"ID":"859af17e-b617-46f1-96a9-e819f88c632f","Type":"ContainerStarted","Data":"6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3"} Dec 05 20:06:44 crc kubenswrapper[4828]: I1205 20:06:44.077252 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dzmqf" podStartSLOduration=2.475217176 podStartE2EDuration="5.077231195s" podCreationTimestamp="2025-12-05 20:06:39 +0000 UTC" firstStartedPulling="2025-12-05 20:06:40.998360744 +0000 UTC m=+3778.893583050" lastFinishedPulling="2025-12-05 20:06:43.600374763 +0000 UTC m=+3781.495597069" observedRunningTime="2025-12-05 20:06:44.067459431 +0000 UTC m=+3781.962681757" watchObservedRunningTime="2025-12-05 20:06:44.077231195 +0000 UTC 
m=+3781.972453511" Dec 05 20:06:44 crc kubenswrapper[4828]: I1205 20:06:44.146957 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/crc-debug-qdnbf" Dec 05 20:06:44 crc kubenswrapper[4828]: I1205 20:06:44.189738 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4mfks/crc-debug-qdnbf"] Dec 05 20:06:44 crc kubenswrapper[4828]: I1205 20:06:44.200649 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4mfks/crc-debug-qdnbf"] Dec 05 20:06:44 crc kubenswrapper[4828]: I1205 20:06:44.322487 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a920da8-5748-49ce-aa1b-39fbf85ad2b6-host\") pod \"7a920da8-5748-49ce-aa1b-39fbf85ad2b6\" (UID: \"7a920da8-5748-49ce-aa1b-39fbf85ad2b6\") " Dec 05 20:06:44 crc kubenswrapper[4828]: I1205 20:06:44.322649 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a920da8-5748-49ce-aa1b-39fbf85ad2b6-host" (OuterVolumeSpecName: "host") pod "7a920da8-5748-49ce-aa1b-39fbf85ad2b6" (UID: "7a920da8-5748-49ce-aa1b-39fbf85ad2b6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 20:06:44 crc kubenswrapper[4828]: I1205 20:06:44.322746 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxcl7\" (UniqueName: \"kubernetes.io/projected/7a920da8-5748-49ce-aa1b-39fbf85ad2b6-kube-api-access-kxcl7\") pod \"7a920da8-5748-49ce-aa1b-39fbf85ad2b6\" (UID: \"7a920da8-5748-49ce-aa1b-39fbf85ad2b6\") " Dec 05 20:06:44 crc kubenswrapper[4828]: I1205 20:06:44.323288 4828 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a920da8-5748-49ce-aa1b-39fbf85ad2b6-host\") on node \"crc\" DevicePath \"\"" Dec 05 20:06:44 crc kubenswrapper[4828]: I1205 20:06:44.328403 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a920da8-5748-49ce-aa1b-39fbf85ad2b6-kube-api-access-kxcl7" (OuterVolumeSpecName: "kube-api-access-kxcl7") pod "7a920da8-5748-49ce-aa1b-39fbf85ad2b6" (UID: "7a920da8-5748-49ce-aa1b-39fbf85ad2b6"). InnerVolumeSpecName "kube-api-access-kxcl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:06:44 crc kubenswrapper[4828]: I1205 20:06:44.428420 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxcl7\" (UniqueName: \"kubernetes.io/projected/7a920da8-5748-49ce-aa1b-39fbf85ad2b6-kube-api-access-kxcl7\") on node \"crc\" DevicePath \"\"" Dec 05 20:06:44 crc kubenswrapper[4828]: I1205 20:06:44.463406 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a920da8-5748-49ce-aa1b-39fbf85ad2b6" path="/var/lib/kubelet/pods/7a920da8-5748-49ce-aa1b-39fbf85ad2b6/volumes" Dec 05 20:06:45 crc kubenswrapper[4828]: I1205 20:06:45.057012 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4mfks/crc-debug-qdnbf" Dec 05 20:06:45 crc kubenswrapper[4828]: I1205 20:06:45.057022 4828 scope.go:117] "RemoveContainer" containerID="0747acf2c1cef61a7b196aa06b8f9a5047367b0aa10b3279125cd25003bcb9e9" Dec 05 20:06:45 crc kubenswrapper[4828]: I1205 20:06:45.383835 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4mfks/crc-debug-vp47z"] Dec 05 20:06:45 crc kubenswrapper[4828]: E1205 20:06:45.384205 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a920da8-5748-49ce-aa1b-39fbf85ad2b6" containerName="container-00" Dec 05 20:06:45 crc kubenswrapper[4828]: I1205 20:06:45.384220 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a920da8-5748-49ce-aa1b-39fbf85ad2b6" containerName="container-00" Dec 05 20:06:45 crc kubenswrapper[4828]: I1205 20:06:45.384403 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a920da8-5748-49ce-aa1b-39fbf85ad2b6" containerName="container-00" Dec 05 20:06:45 crc kubenswrapper[4828]: I1205 20:06:45.385026 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/crc-debug-vp47z" Dec 05 20:06:45 crc kubenswrapper[4828]: I1205 20:06:45.583165 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcd46\" (UniqueName: \"kubernetes.io/projected/52bc3cb5-0e44-4099-bb94-2c31e9197529-kube-api-access-wcd46\") pod \"crc-debug-vp47z\" (UID: \"52bc3cb5-0e44-4099-bb94-2c31e9197529\") " pod="openshift-must-gather-4mfks/crc-debug-vp47z" Dec 05 20:06:45 crc kubenswrapper[4828]: I1205 20:06:45.583351 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/52bc3cb5-0e44-4099-bb94-2c31e9197529-host\") pod \"crc-debug-vp47z\" (UID: \"52bc3cb5-0e44-4099-bb94-2c31e9197529\") " pod="openshift-must-gather-4mfks/crc-debug-vp47z" Dec 05 20:06:45 crc kubenswrapper[4828]: I1205 20:06:45.684721 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcd46\" (UniqueName: \"kubernetes.io/projected/52bc3cb5-0e44-4099-bb94-2c31e9197529-kube-api-access-wcd46\") pod \"crc-debug-vp47z\" (UID: \"52bc3cb5-0e44-4099-bb94-2c31e9197529\") " pod="openshift-must-gather-4mfks/crc-debug-vp47z" Dec 05 20:06:45 crc kubenswrapper[4828]: I1205 20:06:45.684853 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/52bc3cb5-0e44-4099-bb94-2c31e9197529-host\") pod \"crc-debug-vp47z\" (UID: \"52bc3cb5-0e44-4099-bb94-2c31e9197529\") " pod="openshift-must-gather-4mfks/crc-debug-vp47z" Dec 05 20:06:45 crc kubenswrapper[4828]: I1205 20:06:45.684970 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/52bc3cb5-0e44-4099-bb94-2c31e9197529-host\") pod \"crc-debug-vp47z\" (UID: \"52bc3cb5-0e44-4099-bb94-2c31e9197529\") " pod="openshift-must-gather-4mfks/crc-debug-vp47z" Dec 05 20:06:45 crc kubenswrapper[4828]: I1205 20:06:45.708503 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcd46\" (UniqueName: \"kubernetes.io/projected/52bc3cb5-0e44-4099-bb94-2c31e9197529-kube-api-access-wcd46\") pod \"crc-debug-vp47z\" (UID: \"52bc3cb5-0e44-4099-bb94-2c31e9197529\") " pod="openshift-must-gather-4mfks/crc-debug-vp47z" Dec 05 20:06:46 crc kubenswrapper[4828]: I1205 20:06:46.001035 
4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/crc-debug-vp47z" Dec 05 20:06:46 crc kubenswrapper[4828]: I1205 20:06:46.072638 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4mfks/crc-debug-vp47z" event={"ID":"52bc3cb5-0e44-4099-bb94-2c31e9197529","Type":"ContainerStarted","Data":"b0c8ba9ad72f08fe6b2c6600c87a25cc38a0735a01c82c50053b4e5efce5c47c"} Dec 05 20:06:47 crc kubenswrapper[4828]: I1205 20:06:47.087282 4828 generic.go:334] "Generic (PLEG): container finished" podID="52bc3cb5-0e44-4099-bb94-2c31e9197529" containerID="22461acb56facb6aad61bc3319eee7f35445182a17d0ae33c50c81a951b67b8b" exitCode=0 Dec 05 20:06:47 crc kubenswrapper[4828]: I1205 20:06:47.087341 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4mfks/crc-debug-vp47z" event={"ID":"52bc3cb5-0e44-4099-bb94-2c31e9197529","Type":"ContainerDied","Data":"22461acb56facb6aad61bc3319eee7f35445182a17d0ae33c50c81a951b67b8b"} Dec 05 20:06:47 crc kubenswrapper[4828]: I1205 20:06:47.569974 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4mfks/crc-debug-vp47z"] Dec 05 20:06:47 crc kubenswrapper[4828]: I1205 20:06:47.576965 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4mfks/crc-debug-vp47z"] Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.202927 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/crc-debug-vp47z" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.339294 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcd46\" (UniqueName: \"kubernetes.io/projected/52bc3cb5-0e44-4099-bb94-2c31e9197529-kube-api-access-wcd46\") pod \"52bc3cb5-0e44-4099-bb94-2c31e9197529\" (UID: \"52bc3cb5-0e44-4099-bb94-2c31e9197529\") " Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.340135 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/52bc3cb5-0e44-4099-bb94-2c31e9197529-host\") pod \"52bc3cb5-0e44-4099-bb94-2c31e9197529\" (UID: \"52bc3cb5-0e44-4099-bb94-2c31e9197529\") " Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.340324 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52bc3cb5-0e44-4099-bb94-2c31e9197529-host" (OuterVolumeSpecName: "host") pod "52bc3cb5-0e44-4099-bb94-2c31e9197529" (UID: "52bc3cb5-0e44-4099-bb94-2c31e9197529"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.341432 4828 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/52bc3cb5-0e44-4099-bb94-2c31e9197529-host\") on node \"crc\" DevicePath \"\"" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.344916 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52bc3cb5-0e44-4099-bb94-2c31e9197529-kube-api-access-wcd46" (OuterVolumeSpecName: "kube-api-access-wcd46") pod "52bc3cb5-0e44-4099-bb94-2c31e9197529" (UID: "52bc3cb5-0e44-4099-bb94-2c31e9197529"). InnerVolumeSpecName "kube-api-access-wcd46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.443400 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcd46\" (UniqueName: \"kubernetes.io/projected/52bc3cb5-0e44-4099-bb94-2c31e9197529-kube-api-access-wcd46\") on node \"crc\" DevicePath \"\"" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.461422 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52bc3cb5-0e44-4099-bb94-2c31e9197529" path="/var/lib/kubelet/pods/52bc3cb5-0e44-4099-bb94-2c31e9197529/volumes" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.776057 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4mfks/crc-debug-vd8td"] Dec 05 20:06:48 crc kubenswrapper[4828]: E1205 20:06:48.776673 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52bc3cb5-0e44-4099-bb94-2c31e9197529" containerName="container-00" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.776688 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="52bc3cb5-0e44-4099-bb94-2c31e9197529" containerName="container-00" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.776910 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="52bc3cb5-0e44-4099-bb94-2c31e9197529" containerName="container-00" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.777506 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/crc-debug-vd8td" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.848749 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xh4r\" (UniqueName: \"kubernetes.io/projected/1583d0bb-137e-4198-9cb3-2ff53e72cad5-kube-api-access-2xh4r\") pod \"crc-debug-vd8td\" (UID: \"1583d0bb-137e-4198-9cb3-2ff53e72cad5\") " pod="openshift-must-gather-4mfks/crc-debug-vd8td" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.848850 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1583d0bb-137e-4198-9cb3-2ff53e72cad5-host\") pod \"crc-debug-vd8td\" (UID: \"1583d0bb-137e-4198-9cb3-2ff53e72cad5\") " pod="openshift-must-gather-4mfks/crc-debug-vd8td" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.951065 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xh4r\" (UniqueName: \"kubernetes.io/projected/1583d0bb-137e-4198-9cb3-2ff53e72cad5-kube-api-access-2xh4r\") pod \"crc-debug-vd8td\" (UID: \"1583d0bb-137e-4198-9cb3-2ff53e72cad5\") " pod="openshift-must-gather-4mfks/crc-debug-vd8td" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.951155 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1583d0bb-137e-4198-9cb3-2ff53e72cad5-host\") pod \"crc-debug-vd8td\" (UID: \"1583d0bb-137e-4198-9cb3-2ff53e72cad5\") " pod="openshift-must-gather-4mfks/crc-debug-vd8td" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.951355 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1583d0bb-137e-4198-9cb3-2ff53e72cad5-host\") pod \"crc-debug-vd8td\" (UID: \"1583d0bb-137e-4198-9cb3-2ff53e72cad5\") " pod="openshift-must-gather-4mfks/crc-debug-vd8td" Dec 05 20:06:48 crc kubenswrapper[4828]: I1205 20:06:48.973272 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2xh4r\" (UniqueName: \"kubernetes.io/projected/1583d0bb-137e-4198-9cb3-2ff53e72cad5-kube-api-access-2xh4r\") pod \"crc-debug-vd8td\" (UID: \"1583d0bb-137e-4198-9cb3-2ff53e72cad5\") " pod="openshift-must-gather-4mfks/crc-debug-vd8td" Dec 05 20:06:49 crc kubenswrapper[4828]: I1205 20:06:49.095010 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/crc-debug-vd8td" Dec 05 20:06:49 crc kubenswrapper[4828]: I1205 20:06:49.114494 4828 scope.go:117] "RemoveContainer" containerID="22461acb56facb6aad61bc3319eee7f35445182a17d0ae33c50c81a951b67b8b" Dec 05 20:06:49 crc kubenswrapper[4828]: I1205 20:06:49.114535 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/crc-debug-vp47z" Dec 05 20:06:49 crc kubenswrapper[4828]: I1205 20:06:49.594477 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:49 crc kubenswrapper[4828]: I1205 20:06:49.594879 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:49 crc kubenswrapper[4828]: I1205 20:06:49.700254 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:50 crc kubenswrapper[4828]: I1205 20:06:50.127123 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4mfks/crc-debug-vd8td" event={"ID":"1583d0bb-137e-4198-9cb3-2ff53e72cad5","Type":"ContainerStarted","Data":"807c0f65b67361f6a76f559ea471a2b9824ef38fa2e469cfaaccf96ca6219604"} Dec 05 20:06:50 crc kubenswrapper[4828]: I1205 20:06:50.213273 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:50 crc kubenswrapper[4828]: I1205 20:06:50.270251 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dzmqf"] Dec 05 20:06:51 crc kubenswrapper[4828]: I1205 20:06:51.144514 4828 generic.go:334] "Generic (PLEG): container finished" podID="1583d0bb-137e-4198-9cb3-2ff53e72cad5" containerID="3cb7bde59d7082b5fad55984535cdbcedbb6ade62e2e4344cd3dd0a3f26c0c1e" exitCode=0 Dec 05 20:06:51 crc kubenswrapper[4828]: I1205 20:06:51.145022 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4mfks/crc-debug-vd8td" event={"ID":"1583d0bb-137e-4198-9cb3-2ff53e72cad5","Type":"ContainerDied","Data":"3cb7bde59d7082b5fad55984535cdbcedbb6ade62e2e4344cd3dd0a3f26c0c1e"} Dec 05 20:06:51 crc kubenswrapper[4828]: I1205 20:06:51.203475 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4mfks/crc-debug-vd8td"] Dec 05 20:06:51 crc kubenswrapper[4828]: I1205 20:06:51.220663 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4mfks/crc-debug-vd8td"] Dec 05 20:06:51 crc kubenswrapper[4828]: I1205 20:06:51.447303 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:06:51 crc kubenswrapper[4828]: E1205 20:06:51.447642 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.152993 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dzmqf" podUID="859af17e-b617-46f1-96a9-e819f88c632f" containerName="registry-server" containerID="cri-o://6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3" gracePeriod=2 Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.400985 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/crc-debug-vd8td" Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.556013 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1583d0bb-137e-4198-9cb3-2ff53e72cad5-host\") pod \"1583d0bb-137e-4198-9cb3-2ff53e72cad5\" (UID: \"1583d0bb-137e-4198-9cb3-2ff53e72cad5\") " Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.556342 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xh4r\" (UniqueName: \"kubernetes.io/projected/1583d0bb-137e-4198-9cb3-2ff53e72cad5-kube-api-access-2xh4r\") pod \"1583d0bb-137e-4198-9cb3-2ff53e72cad5\" (UID: \"1583d0bb-137e-4198-9cb3-2ff53e72cad5\") " Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.556569 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1583d0bb-137e-4198-9cb3-2ff53e72cad5-host" (OuterVolumeSpecName: "host") pod "1583d0bb-137e-4198-9cb3-2ff53e72cad5" (UID: "1583d0bb-137e-4198-9cb3-2ff53e72cad5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.556787 4828 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1583d0bb-137e-4198-9cb3-2ff53e72cad5-host\") on node \"crc\" DevicePath \"\"" Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.575054 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1583d0bb-137e-4198-9cb3-2ff53e72cad5-kube-api-access-2xh4r" (OuterVolumeSpecName: "kube-api-access-2xh4r") pod "1583d0bb-137e-4198-9cb3-2ff53e72cad5" (UID: "1583d0bb-137e-4198-9cb3-2ff53e72cad5"). InnerVolumeSpecName "kube-api-access-2xh4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.658130 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xh4r\" (UniqueName: \"kubernetes.io/projected/1583d0bb-137e-4198-9cb3-2ff53e72cad5-kube-api-access-2xh4r\") on node \"crc\" DevicePath \"\"" Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.661780 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.759390 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859af17e-b617-46f1-96a9-e819f88c632f-utilities\") pod \"859af17e-b617-46f1-96a9-e819f88c632f\" (UID: \"859af17e-b617-46f1-96a9-e819f88c632f\") " Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.759976 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859af17e-b617-46f1-96a9-e819f88c632f-catalog-content\") pod \"859af17e-b617-46f1-96a9-e819f88c632f\" (UID: \"859af17e-b617-46f1-96a9-e819f88c632f\") " Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.760145 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmk98\" (UniqueName: \"kubernetes.io/projected/859af17e-b617-46f1-96a9-e819f88c632f-kube-api-access-rmk98\") pod \"859af17e-b617-46f1-96a9-e819f88c632f\" (UID: \"859af17e-b617-46f1-96a9-e819f88c632f\") " Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.760189 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/859af17e-b617-46f1-96a9-e819f88c632f-utilities" (OuterVolumeSpecName: "utilities") pod "859af17e-b617-46f1-96a9-e819f88c632f" (UID: "859af17e-b617-46f1-96a9-e819f88c632f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.763910 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859af17e-b617-46f1-96a9-e819f88c632f-kube-api-access-rmk98" (OuterVolumeSpecName: "kube-api-access-rmk98") pod "859af17e-b617-46f1-96a9-e819f88c632f" (UID: "859af17e-b617-46f1-96a9-e819f88c632f"). InnerVolumeSpecName "kube-api-access-rmk98". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.861771 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmk98\" (UniqueName: \"kubernetes.io/projected/859af17e-b617-46f1-96a9-e819f88c632f-kube-api-access-rmk98\") on node \"crc\" DevicePath \"\"" Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.861805 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859af17e-b617-46f1-96a9-e819f88c632f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.867461 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/859af17e-b617-46f1-96a9-e819f88c632f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "859af17e-b617-46f1-96a9-e819f88c632f" (UID: "859af17e-b617-46f1-96a9-e819f88c632f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:06:52 crc kubenswrapper[4828]: I1205 20:06:52.963051 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859af17e-b617-46f1-96a9-e819f88c632f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.162260 4828 generic.go:334] "Generic (PLEG): container finished" podID="859af17e-b617-46f1-96a9-e819f88c632f" containerID="6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3" exitCode=0 Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.162425 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzmqf" event={"ID":"859af17e-b617-46f1-96a9-e819f88c632f","Type":"ContainerDied","Data":"6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3"} Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.162568 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzmqf" event={"ID":"859af17e-b617-46f1-96a9-e819f88c632f","Type":"ContainerDied","Data":"29f78f3938b11965008bc42dfdb9ea1d5a6539ec44b23aee8aa3ef4fc72a613b"} Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.162590 4828 scope.go:117] "RemoveContainer" containerID="6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3" Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.163511 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzmqf" Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.164559 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/crc-debug-vd8td" Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.229043 4828 scope.go:117] "RemoveContainer" containerID="c8890aba8f6a68149d1e0fde082797329a8e4efc357158202aff327a0f8cce9f" Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.231407 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dzmqf"] Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.248094 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dzmqf"] Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.258591 4828 scope.go:117] "RemoveContainer" containerID="4c02cd7417522766eb4d42b652a19bc0adeccc1ef3169e9fec548b45fcae42ee" Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.304620 4828 scope.go:117] "RemoveContainer" containerID="6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3" Dec 05 20:06:53 crc kubenswrapper[4828]: E1205 20:06:53.305262 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3\": container with ID starting with 6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3 not found: ID does not exist" containerID="6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3" Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.305301 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3"} err="failed to get container status \"6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3\": rpc error: code = NotFound desc = could not find container 
\"6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3\": container with ID starting with 6cbda3b4c8968a564457d6add2a50ec7cabe470beaeec42cfe4528d0e4bbdde3 not found: ID does not exist" Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.305334 4828 scope.go:117] "RemoveContainer" containerID="c8890aba8f6a68149d1e0fde082797329a8e4efc357158202aff327a0f8cce9f" Dec 05 20:06:53 crc kubenswrapper[4828]: E1205 20:06:53.305794 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8890aba8f6a68149d1e0fde082797329a8e4efc357158202aff327a0f8cce9f\": container with ID starting with c8890aba8f6a68149d1e0fde082797329a8e4efc357158202aff327a0f8cce9f not found: ID does not exist" containerID="c8890aba8f6a68149d1e0fde082797329a8e4efc357158202aff327a0f8cce9f" Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.305878 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8890aba8f6a68149d1e0fde082797329a8e4efc357158202aff327a0f8cce9f"} err="failed to get container status \"c8890aba8f6a68149d1e0fde082797329a8e4efc357158202aff327a0f8cce9f\": rpc error: code = NotFound desc = could not find container \"c8890aba8f6a68149d1e0fde082797329a8e4efc357158202aff327a0f8cce9f\": container with ID starting with c8890aba8f6a68149d1e0fde082797329a8e4efc357158202aff327a0f8cce9f not found: ID does not exist" Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.305910 4828 scope.go:117] "RemoveContainer" containerID="4c02cd7417522766eb4d42b652a19bc0adeccc1ef3169e9fec548b45fcae42ee" Dec 05 20:06:53 crc kubenswrapper[4828]: E1205 20:06:53.306195 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c02cd7417522766eb4d42b652a19bc0adeccc1ef3169e9fec548b45fcae42ee\": container with ID starting with 4c02cd7417522766eb4d42b652a19bc0adeccc1ef3169e9fec548b45fcae42ee not found: ID does not exist" containerID="4c02cd7417522766eb4d42b652a19bc0adeccc1ef3169e9fec548b45fcae42ee" Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.306228 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c02cd7417522766eb4d42b652a19bc0adeccc1ef3169e9fec548b45fcae42ee"} err="failed to get container status \"4c02cd7417522766eb4d42b652a19bc0adeccc1ef3169e9fec548b45fcae42ee\": rpc error: code = NotFound desc = could not find container \"4c02cd7417522766eb4d42b652a19bc0adeccc1ef3169e9fec548b45fcae42ee\": container with ID starting with 4c02cd7417522766eb4d42b652a19bc0adeccc1ef3169e9fec548b45fcae42ee not found: ID does not exist" Dec 05 20:06:53 crc kubenswrapper[4828]: I1205 20:06:53.306244 4828 scope.go:117] "RemoveContainer" containerID="3cb7bde59d7082b5fad55984535cdbcedbb6ade62e2e4344cd3dd0a3f26c0c1e" Dec 05 20:06:54 crc kubenswrapper[4828]: I1205 20:06:54.458848 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1583d0bb-137e-4198-9cb3-2ff53e72cad5" path="/var/lib/kubelet/pods/1583d0bb-137e-4198-9cb3-2ff53e72cad5/volumes" Dec 05 20:06:54 crc kubenswrapper[4828]: I1205 20:06:54.459624 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859af17e-b617-46f1-96a9-e819f88c632f" path="/var/lib/kubelet/pods/859af17e-b617-46f1-96a9-e819f88c632f/volumes" Dec 05 20:07:05 crc kubenswrapper[4828]: I1205 20:07:05.446728 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:07:05 crc 
kubenswrapper[4828]: E1205 20:07:05.447531 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:07:06 crc kubenswrapper[4828]: I1205 20:07:06.881310 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8769b7dc8-87tcr_7b34386c-5f6a-420f-8889-5dd31e8560c0/barbican-api/0.log" Dec 05 20:07:06 crc kubenswrapper[4828]: I1205 20:07:06.981396 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8769b7dc8-87tcr_7b34386c-5f6a-420f-8889-5dd31e8560c0/barbican-api-log/0.log" Dec 05 20:07:07 crc kubenswrapper[4828]: I1205 20:07:07.059068 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c9967b7f4-tjx24_e1a17074-48bc-4f34-8a44-dd1321ff8fc1/barbican-keystone-listener/0.log" Dec 05 20:07:07 crc kubenswrapper[4828]: I1205 20:07:07.133997 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c9967b7f4-tjx24_e1a17074-48bc-4f34-8a44-dd1321ff8fc1/barbican-keystone-listener-log/0.log" Dec 05 20:07:07 crc kubenswrapper[4828]: I1205 20:07:07.377828 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d6b94f97f-2m6zj_ba560005-dff7-4d93-b2aa-58d922405ff3/barbican-worker-log/0.log" Dec 05 20:07:07 crc kubenswrapper[4828]: I1205 20:07:07.388733 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d6b94f97f-2m6zj_ba560005-dff7-4d93-b2aa-58d922405ff3/barbican-worker/0.log" Dec 05 20:07:07 crc kubenswrapper[4828]: I1205 20:07:07.570007 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr_f959e321-6568-4dd3-8c87-0ebb49d9c517/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:07 crc kubenswrapper[4828]: I1205 20:07:07.623052 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1984b123-aa0e-4af4-a396-76c783a22b45/ceilometer-central-agent/0.log" Dec 05 20:07:07 crc kubenswrapper[4828]: I1205 20:07:07.734289 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1984b123-aa0e-4af4-a396-76c783a22b45/ceilometer-notification-agent/0.log" Dec 05 20:07:07 crc kubenswrapper[4828]: I1205 20:07:07.792460 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1984b123-aa0e-4af4-a396-76c783a22b45/proxy-httpd/0.log" Dec 05 20:07:07 crc kubenswrapper[4828]: I1205 20:07:07.836540 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1984b123-aa0e-4af4-a396-76c783a22b45/sg-core/0.log" Dec 05 20:07:07 crc kubenswrapper[4828]: I1205 20:07:07.981400 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_55a11269-8096-4009-a3b0-44f7d554fe4f/cinder-api/0.log" Dec 05 20:07:08 crc kubenswrapper[4828]: I1205 20:07:08.021842 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_55a11269-8096-4009-a3b0-44f7d554fe4f/cinder-api-log/0.log" Dec 05 20:07:08 crc kubenswrapper[4828]: I1205 20:07:08.158735 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_714c55c4-ac9a-4e63-8159-04f311676ad5/cinder-scheduler/0.log" Dec 05 20:07:08 crc kubenswrapper[4828]: I1205 20:07:08.224544 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_714c55c4-ac9a-4e63-8159-04f311676ad5/probe/0.log" Dec 05 20:07:08 crc kubenswrapper[4828]: I1205 20:07:08.334231 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz_814c8a59-108d-4ee6-943c-2f4e11294f14/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:08 crc kubenswrapper[4828]: I1205 20:07:08.422782 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f_3f1c3024-3679-435b-9252-3cd35ee43b4b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:08 crc kubenswrapper[4828]: I1205 20:07:08.540163 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-w4hmm_f77804ae-0e68-40a3-bbd8-5dac2e64eedf/init/0.log" Dec 05 20:07:08 crc kubenswrapper[4828]: I1205 20:07:08.738959 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-w4hmm_f77804ae-0e68-40a3-bbd8-5dac2e64eedf/dnsmasq-dns/0.log" Dec 05 20:07:08 crc kubenswrapper[4828]: I1205 20:07:08.756602 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-w4hmm_f77804ae-0e68-40a3-bbd8-5dac2e64eedf/init/0.log" Dec 05 20:07:08 crc kubenswrapper[4828]: I1205 20:07:08.775902 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-dcrds_04bf9e49-2000-4a46-81a8-3dc1ef7c352f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:08 crc kubenswrapper[4828]: I1205 20:07:08.981404 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8ee563d9-a334-428c-8d24-b0b1438e8ee8/glance-log/0.log" Dec 05 20:07:08 crc kubenswrapper[4828]: I1205 20:07:08.984528 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8ee563d9-a334-428c-8d24-b0b1438e8ee8/glance-httpd/0.log" Dec 05 20:07:09 crc kubenswrapper[4828]: I1205 20:07:09.126597 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a7126f93-6b58-41ae-8f7a-b86281398e90/glance-httpd/0.log" Dec 05 20:07:09 crc kubenswrapper[4828]: I1205 20:07:09.212531 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a7126f93-6b58-41ae-8f7a-b86281398e90/glance-log/0.log" Dec 05 20:07:09 crc kubenswrapper[4828]: I1205 20:07:09.313058 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-594b9fb44-r9zh6_99c01665-feb9-49f7-a97a-b6e6d87dc991/horizon/0.log" Dec 05 20:07:09 crc kubenswrapper[4828]: I1205 20:07:09.516567 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8_ab033631-5ea0-4fce-a4e3-3f0c390f07ac/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:09 crc kubenswrapper[4828]: I1205 20:07:09.687880 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-594b9fb44-r9zh6_99c01665-feb9-49f7-a97a-b6e6d87dc991/horizon-log/0.log" Dec 05 20:07:09 crc kubenswrapper[4828]: I1205 20:07:09.708993 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-bctgl_96658594-f9dc-4bc6-8d77-3db81db8d2fd/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:09 crc kubenswrapper[4828]: I1205 20:07:09.933075 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29416081-mbbpn_d30b1521-4341-40f3-8952-8e0d03fc192b/keystone-cron/0.log" Dec 05 20:07:09 crc kubenswrapper[4828]: I1205 20:07:09.950957 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-89dcf679f-97rfx_49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4/keystone-api/0.log" Dec 05 20:07:10 crc kubenswrapper[4828]: I1205 20:07:10.145325 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d2de7f1c-8c50-41e1-be30-ce169c261e65/kube-state-metrics/0.log" Dec 05 20:07:10 crc kubenswrapper[4828]: I1205 20:07:10.180690 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5_b46bef7a-7a08-49f8-a4ff-d6fae6ac588e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:10 crc kubenswrapper[4828]: I1205 20:07:10.516527 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dc6b5c7cf-8mlhh_f336b9b2-7051-43e5-8b10-ad9cab15c947/neutron-api/0.log" Dec 05 20:07:10 crc kubenswrapper[4828]: I1205 20:07:10.642451 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dc6b5c7cf-8mlhh_f336b9b2-7051-43e5-8b10-ad9cab15c947/neutron-httpd/0.log" Dec 05 20:07:10 crc kubenswrapper[4828]: I1205 20:07:10.803115 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2_02cb0b69-3011-491e-8081-0ee1a0053610/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:11 crc kubenswrapper[4828]: I1205 20:07:11.458770 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d/nova-cell0-conductor-conductor/0.log" Dec 05 20:07:11 crc kubenswrapper[4828]: I1205 20:07:11.466455 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e8beb365-61f2-42bf-be67-af226900e81c/nova-api-log/0.log" Dec 05 20:07:11 crc kubenswrapper[4828]: I1205 20:07:11.510404 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e8beb365-61f2-42bf-be67-af226900e81c/nova-api-api/0.log" Dec 05 20:07:11 crc kubenswrapper[4828]: I1205 20:07:11.797447 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ddf78da4-c3d6-41b2-b8e1-803e3f075586/nova-cell1-novncproxy-novncproxy/0.log" Dec 05 20:07:11 crc kubenswrapper[4828]: I1205 20:07:11.814667 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b/nova-cell1-conductor-conductor/0.log" Dec 05 20:07:11 crc kubenswrapper[4828]: I1205 20:07:11.976083 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-chk7b_b730436a-244c-4d2f-8e29-ca230cfe4921/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:12 crc kubenswrapper[4828]: I1205 20:07:12.128757 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d0bf5ea5-86ef-400d-a033-4bb5c31f61df/nova-metadata-log/0.log" Dec 05 20:07:12 crc kubenswrapper[4828]: I1205 20:07:12.396368 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_e6c44e1b-fe99-4645-894d-8f7c89ec0ed2/nova-scheduler-scheduler/0.log" Dec 05 20:07:12 crc kubenswrapper[4828]: I1205 20:07:12.494604 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7064b569-c206-4ed9-8f28-3e5a7e92bf79/mysql-bootstrap/0.log" Dec 05 20:07:12 crc kubenswrapper[4828]: I1205 20:07:12.694432 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7064b569-c206-4ed9-8f28-3e5a7e92bf79/mysql-bootstrap/0.log" Dec 05 20:07:12 crc kubenswrapper[4828]: I1205 20:07:12.713420 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7064b569-c206-4ed9-8f28-3e5a7e92bf79/galera/0.log" Dec 05 20:07:12 crc kubenswrapper[4828]: I1205 20:07:12.935744 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2debacb-a691-43ee-aa79-670bbec2a98a/mysql-bootstrap/0.log" Dec 05 20:07:13 crc kubenswrapper[4828]: I1205 20:07:13.177266 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2debacb-a691-43ee-aa79-670bbec2a98a/mysql-bootstrap/0.log" Dec 05 20:07:13 crc kubenswrapper[4828]: I1205 20:07:13.205626 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2debacb-a691-43ee-aa79-670bbec2a98a/galera/0.log" Dec 05 20:07:13 crc kubenswrapper[4828]: I1205 20:07:13.217745 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d0bf5ea5-86ef-400d-a033-4bb5c31f61df/nova-metadata-metadata/0.log" Dec 05 20:07:13 crc kubenswrapper[4828]: I1205 20:07:13.342160 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_847a8779-d691-4659-9166-a8f39abb55f4/openstackclient/0.log" Dec 05 20:07:13 crc kubenswrapper[4828]: I1205 20:07:13.490482 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gqdhj_4ba9cffc-5e2b-44e9-966a-833ab0de45eb/openstack-network-exporter/0.log" Dec 05 20:07:13 crc kubenswrapper[4828]: I1205 20:07:13.579643 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l467t_3b912679-3c5e-4511-8769-8b8b4923d9fd/ovsdb-server-init/0.log" Dec 05 20:07:13 crc kubenswrapper[4828]: I1205 20:07:13.824698 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l467t_3b912679-3c5e-4511-8769-8b8b4923d9fd/ovsdb-server/0.log" Dec 05 20:07:13 crc kubenswrapper[4828]: I1205 20:07:13.848659 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l467t_3b912679-3c5e-4511-8769-8b8b4923d9fd/ovsdb-server-init/0.log" Dec 05 20:07:13 crc kubenswrapper[4828]: I1205 20:07:13.879287 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l467t_3b912679-3c5e-4511-8769-8b8b4923d9fd/ovs-vswitchd/0.log" Dec 05 20:07:14 crc kubenswrapper[4828]: I1205 20:07:14.040544 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-s6jdb_f88a4161-1271-4374-9740-eaea879d6561/ovn-controller/0.log" Dec 05 20:07:14 crc kubenswrapper[4828]: I1205 20:07:14.140384 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-57rf6_0ce437eb-13b3-49a9-adcf-874e3e672a8c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:14 crc kubenswrapper[4828]: I1205 20:07:14.238848 4828 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-northd-0_bc0e095c-680f-45ec-96b2-3713515bc9c3/openstack-network-exporter/0.log" Dec 05 20:07:14 crc kubenswrapper[4828]: I1205 20:07:14.302257 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bc0e095c-680f-45ec-96b2-3713515bc9c3/ovn-northd/0.log" Dec 05 20:07:14 crc kubenswrapper[4828]: I1205 20:07:14.469809 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7ac00d92-7825-4462-ab12-8d2059085d24/openstack-network-exporter/0.log" Dec 05 20:07:14 crc kubenswrapper[4828]: I1205 20:07:14.535762 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7ac00d92-7825-4462-ab12-8d2059085d24/ovsdbserver-nb/0.log" Dec 05 20:07:14 crc kubenswrapper[4828]: I1205 20:07:14.669225 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_31b675bd-ec74-4876-91a0-95e4180e8cab/ovsdbserver-sb/0.log" Dec 05 20:07:14 crc kubenswrapper[4828]: I1205 20:07:14.715964 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_31b675bd-ec74-4876-91a0-95e4180e8cab/openstack-network-exporter/0.log" Dec 05 20:07:14 crc kubenswrapper[4828]: I1205 20:07:14.810787 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6bfcc469f6-vtpj6_a9e67cf9-61e2-43a1-867a-a8f97ada16a4/placement-api/0.log" Dec 05 20:07:15 crc kubenswrapper[4828]: I1205 20:07:15.166049 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6bfcc469f6-vtpj6_a9e67cf9-61e2-43a1-867a-a8f97ada16a4/placement-log/0.log" Dec 05 20:07:15 crc kubenswrapper[4828]: I1205 20:07:15.184044 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_97ef01a4-c35c-41a0-abf1-1fbb83ff67e6/setup-container/0.log" Dec 05 20:07:15 crc kubenswrapper[4828]: I1205 20:07:15.376159 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_97ef01a4-c35c-41a0-abf1-1fbb83ff67e6/rabbitmq/0.log" Dec 05 20:07:15 crc kubenswrapper[4828]: I1205 20:07:15.455138 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_97ef01a4-c35c-41a0-abf1-1fbb83ff67e6/setup-container/0.log" Dec 05 20:07:15 crc kubenswrapper[4828]: I1205 20:07:15.536241 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63ac6b69-a1ea-4b8d-9532-679d79cd1a87/setup-container/0.log" Dec 05 20:07:15 crc kubenswrapper[4828]: I1205 20:07:15.680660 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63ac6b69-a1ea-4b8d-9532-679d79cd1a87/setup-container/0.log" Dec 05 20:07:15 crc kubenswrapper[4828]: I1205 20:07:15.696379 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63ac6b69-a1ea-4b8d-9532-679d79cd1a87/rabbitmq/0.log" Dec 05 20:07:15 crc kubenswrapper[4828]: I1205 20:07:15.733477 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g_f51d93aa-b89c-4da8-b091-8a9888820e61/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:16 crc kubenswrapper[4828]: I1205 20:07:16.026504 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-5fhnz_096e625a-8244-411f-aaad-9746cf1e1878/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:16 crc kubenswrapper[4828]: I1205 20:07:16.065037 
4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss_a2df868b-dc23-4623-9203-42c91c9ff35b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:16 crc kubenswrapper[4828]: I1205 20:07:16.338481 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-n5lc2_9b0ec9c6-c67f-45f2-be21-251c97a44a7e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:16 crc kubenswrapper[4828]: I1205 20:07:16.341404 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-f242c_c7014005-08da-4204-96f5-163111e61315/ssh-known-hosts-edpm-deployment/0.log" Dec 05 20:07:16 crc kubenswrapper[4828]: I1205 20:07:16.617663 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7b86fcf7f7-wb4rw_6cac2917-5dee-4c64-a745-42e811cd735f/proxy-server/0.log" Dec 05 20:07:16 crc kubenswrapper[4828]: I1205 20:07:16.754345 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vs6cm_784df3ad-b111-476d-ad5c-e10ee3e04b2f/swift-ring-rebalance/0.log" Dec 05 20:07:16 crc kubenswrapper[4828]: I1205 20:07:16.764448 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7b86fcf7f7-wb4rw_6cac2917-5dee-4c64-a745-42e811cd735f/proxy-httpd/0.log" Dec 05 20:07:16 crc kubenswrapper[4828]: I1205 20:07:16.866559 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/account-auditor/0.log" Dec 05 20:07:16 crc kubenswrapper[4828]: I1205 20:07:16.943250 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/account-reaper/0.log" Dec 05 20:07:16 crc kubenswrapper[4828]: I1205 20:07:16.967907 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/account-replicator/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.070733 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/account-server/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.100737 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/container-auditor/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.192300 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/container-replicator/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.198350 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/container-server/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.293648 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/container-updater/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.338712 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/object-auditor/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.392703 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/object-expirer/0.log" 
Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.466561 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/object-replicator/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.574025 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/object-updater/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.585843 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/object-server/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.596143 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/rsync/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.726755 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/swift-recon-cron/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.840225 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b_e0dde2a7-439b-4b5a-8e4b-363089a9879a/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:17 crc kubenswrapper[4828]: I1205 20:07:17.957399 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9d71b946-ed36-403c-9faf-feb03f741474/tempest-tests-tempest-tests-runner/0.log" Dec 05 20:07:18 crc kubenswrapper[4828]: I1205 20:07:18.100130 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9fe13abd-7133-4370-a848-17cea54271e1/test-operator-logs-container/0.log" Dec 05 20:07:18 crc kubenswrapper[4828]: I1205 20:07:18.237057 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz_a516aad0-97c7-46b3-b692-660dbd380bff/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:07:18 crc kubenswrapper[4828]: I1205 20:07:18.446159 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:07:18 crc kubenswrapper[4828]: E1205 20:07:18.446462 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:07:25 crc kubenswrapper[4828]: I1205 20:07:25.574292 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_93282807-6c59-42db-9235-8b2097a8f7a9/memcached/0.log" Dec 05 20:07:26 crc kubenswrapper[4828]: I1205 20:07:26.487638 4828 generic.go:334] "Generic (PLEG): container finished" podID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" exitCode=1 Dec 05 20:07:26 crc kubenswrapper[4828]: I1205 20:07:26.487717 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" 
event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerDied","Data":"49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434"} Dec 05 20:07:26 crc kubenswrapper[4828]: I1205 20:07:26.487780 4828 scope.go:117] "RemoveContainer" containerID="77ff2ec7eccf11eb6122ebe5e1a7ae6109a17dd78f66c6d40ffcb95bfd9d37f9" Dec 05 20:07:26 crc kubenswrapper[4828]: I1205 20:07:26.488484 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:07:26 crc kubenswrapper[4828]: E1205 20:07:26.488734 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:07:31 crc kubenswrapper[4828]: I1205 20:07:31.448224 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:07:31 crc kubenswrapper[4828]: E1205 20:07:31.449059 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:07:35 crc kubenswrapper[4828]: I1205 20:07:35.118396 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 20:07:35 crc kubenswrapper[4828]: I1205 20:07:35.119702 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:07:35 crc kubenswrapper[4828]: E1205 20:07:35.119968 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:07:42 crc kubenswrapper[4828]: I1205 20:07:42.685594 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/util/0.log" Dec 05 20:07:42 crc kubenswrapper[4828]: I1205 20:07:42.850168 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/util/0.log" Dec 05 20:07:42 crc kubenswrapper[4828]: I1205 20:07:42.863609 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/pull/0.log" Dec 05 20:07:42 crc kubenswrapper[4828]: I1205 20:07:42.885076 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/pull/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.090999 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/extract/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.099869 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/util/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.165585 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/pull/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.254128 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-jbg6n_1f1ef15a-9832-4ee5-8077-066329f6180a/kube-rbac-proxy/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.368218 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-jbg6n_1f1ef15a-9832-4ee5-8077-066329f6180a/manager/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.377279 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-qftqg_4276bd34-acab-4936-a044-7d00e33e806f/kube-rbac-proxy/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.446129 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:07:43 crc kubenswrapper[4828]: E1205 20:07:43.446431 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.490649 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-qftqg_4276bd34-acab-4936-a044-7d00e33e806f/manager/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.536095 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-cr94b_16bfe264-a5d1-433e-93ee-c6821e882c4c/kube-rbac-proxy/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.574258 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-cr94b_16bfe264-a5d1-433e-93ee-c6821e882c4c/manager/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.726216 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-v92pz_a27719f3-1ce1-4a2b-876f-f280966f8e8c/kube-rbac-proxy/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.800657 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-v92pz_a27719f3-1ce1-4a2b-876f-f280966f8e8c/manager/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.879949 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-k7qf5_f5bca056-89ff-4e36-82b7-ad44d9dc00d6/kube-rbac-proxy/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.928422 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-k7qf5_f5bca056-89ff-4e36-82b7-ad44d9dc00d6/manager/0.log" Dec 05 20:07:43 crc kubenswrapper[4828]: I1205 20:07:43.963886 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-g2wd4_7dbe4cda-8493-4e63-9544-7dfff2495c65/kube-rbac-proxy/0.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.082208 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-g2wd4_7dbe4cda-8493-4e63-9544-7dfff2495c65/manager/0.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.120799 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-575477cdfc-lrhm5_03c4fc5d-6be1-47b4-9c39-7bb86046dafd/kube-rbac-proxy/0.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.170679 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-575477cdfc-lrhm5_03c4fc5d-6be1-47b4-9c39-7bb86046dafd/manager/9.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.266758 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-575477cdfc-lrhm5_03c4fc5d-6be1-47b4-9c39-7bb86046dafd/manager/9.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.333221 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-9jbwm_04671bff-8616-471f-bd46-21e6b17227eb/kube-rbac-proxy/0.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.356243 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-9jbwm_04671bff-8616-471f-bd46-21e6b17227eb/manager/0.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.465877 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-pbmt2_1e335b54-f84b-4d91-a58e-0348728d171e/kube-rbac-proxy/0.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.578748 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-pbmt2_1e335b54-f84b-4d91-a58e-0348728d171e/manager/0.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.649748 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-cgkv6_3b18c18d-624d-4d50-95ba-a4f755f74936/kube-rbac-proxy/0.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.650213 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-cgkv6_3b18c18d-624d-4d50-95ba-a4f755f74936/manager/0.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.776716 4828 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-77twz_c6d11d68-9609-432a-a855-4789df83739d/kube-rbac-proxy/0.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.857130 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-77twz_c6d11d68-9609-432a-a855-4789df83739d/manager/0.log" Dec 05 20:07:44 crc kubenswrapper[4828]: I1205 20:07:44.968416 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-gsczh_a03904e7-57be-4491-b11d-c8e698b718e6/kube-rbac-proxy/0.log" Dec 05 20:07:45 crc kubenswrapper[4828]: I1205 20:07:45.024181 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-gsczh_a03904e7-57be-4491-b11d-c8e698b718e6/manager/0.log" Dec 05 20:07:45 crc kubenswrapper[4828]: I1205 20:07:45.074616 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-l6gtp_757d5884-94d5-45f1-ae2c-49fd93ce512c/kube-rbac-proxy/0.log" Dec 05 20:07:45 crc kubenswrapper[4828]: I1205 20:07:45.117979 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 20:07:45 crc kubenswrapper[4828]: I1205 20:07:45.118811 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:07:45 crc kubenswrapper[4828]: E1205 20:07:45.119102 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:07:45 crc kubenswrapper[4828]: I1205 20:07:45.211687 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-l6gtp_757d5884-94d5-45f1-ae2c-49fd93ce512c/manager/0.log" Dec 05 20:07:45 crc kubenswrapper[4828]: I1205 20:07:45.250642 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-cfnbh_a5d6b211-6f88-45fe-8e38-608271465dfe/kube-rbac-proxy/0.log" Dec 05 20:07:45 crc kubenswrapper[4828]: I1205 20:07:45.303678 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-cfnbh_a5d6b211-6f88-45fe-8e38-608271465dfe/manager/0.log" Dec 05 20:07:45 crc kubenswrapper[4828]: I1205 20:07:45.397710 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j_1ce74c6c-ee96-4712-983f-4090e176f31e/kube-rbac-proxy/0.log" Dec 05 20:07:45 crc kubenswrapper[4828]: I1205 20:07:45.415495 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j_1ce74c6c-ee96-4712-983f-4090e176f31e/manager/0.log" Dec 05 20:07:45 crc kubenswrapper[4828]: I1205 20:07:45.814405 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-mn9b4_850c5dc4-1658-4c59-96eb-999fb7392164/registry-server/0.log" Dec 05 20:07:46 crc kubenswrapper[4828]: I1205 20:07:46.008787 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-4gr5g_ba58375c-b3fa-4eb8-8813-c55f003674ca/kube-rbac-proxy/0.log" Dec 05 20:07:46 crc kubenswrapper[4828]: I1205 20:07:46.079702 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-56d574f77c-99sf5_4ceee1c7-178c-4496-9cdd-c302d5180aca/operator/0.log" Dec 05 20:07:46 crc kubenswrapper[4828]: I1205 20:07:46.309052 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-4gr5g_ba58375c-b3fa-4eb8-8813-c55f003674ca/manager/0.log" Dec 05 20:07:46 crc kubenswrapper[4828]: I1205 20:07:46.319634 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-cf5gg_cd2986fb-f299-446c-85b7-28427df0ca51/kube-rbac-proxy/0.log" Dec 05 20:07:46 crc kubenswrapper[4828]: I1205 20:07:46.539892 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-w4rgq_dabc71c3-947a-4d4c-90bd-b5bb473ce013/operator/0.log" Dec 05 20:07:46 crc kubenswrapper[4828]: I1205 20:07:46.558728 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-cf5gg_cd2986fb-f299-446c-85b7-28427df0ca51/manager/0.log" Dec 05 20:07:46 crc kubenswrapper[4828]: I1205 20:07:46.725288 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-6xg2c_48135908-b8f6-47ab-aeb7-3f74bb3e2cde/kube-rbac-proxy/0.log" Dec 05 20:07:46 crc kubenswrapper[4828]: I1205 20:07:46.725629 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-d5958f94b-76zjx_408ecf49-524f-4743-9cef-5c65877dd176/manager/0.log" Dec 05 20:07:46 crc kubenswrapper[4828]: I1205 20:07:46.760108 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-6xg2c_48135908-b8f6-47ab-aeb7-3f74bb3e2cde/manager/0.log" Dec 05 20:07:46 crc kubenswrapper[4828]: I1205 20:07:46.838716 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-hdxm9_8c1110f4-40af-416e-9624-22a901897000/kube-rbac-proxy/0.log" Dec 05 20:07:46 crc kubenswrapper[4828]: I1205 20:07:46.952158 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-hdxm9_8c1110f4-40af-416e-9624-22a901897000/manager/0.log" Dec 05 20:07:46 crc kubenswrapper[4828]: I1205 20:07:46.952605 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h2d97_13474ecf-c76e-400f-bc72-70c11ab8356b/kube-rbac-proxy/0.log" Dec 05 20:07:47 crc kubenswrapper[4828]: I1205 20:07:47.009416 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h2d97_13474ecf-c76e-400f-bc72-70c11ab8356b/manager/0.log" Dec 05 20:07:47 crc kubenswrapper[4828]: I1205 20:07:47.064394 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-qdslg_bf305ed3-e27f-42bc-9fb7-bec903ca820f/kube-rbac-proxy/0.log" Dec 05 20:07:47 crc kubenswrapper[4828]: I1205 20:07:47.119136 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-qdslg_bf305ed3-e27f-42bc-9fb7-bec903ca820f/manager/0.log" Dec 05 20:07:54 crc kubenswrapper[4828]: I1205 20:07:54.449220 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:07:54 crc kubenswrapper[4828]: E1205 20:07:54.449913 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:07:59 crc kubenswrapper[4828]: I1205 20:07:59.447245 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:07:59 crc kubenswrapper[4828]: E1205 20:07:59.448040 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:08:04 crc kubenswrapper[4828]: I1205 20:08:04.771640 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kn6kp_6ad1915a-9298-4aba-928b-5d3c7d57a7bb/control-plane-machine-set-operator/0.log" Dec 05 20:08:04 crc kubenswrapper[4828]: I1205 20:08:04.956169 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vbgcx_e5365032-f31f-4e90-bb94-193e5d6dcc9f/kube-rbac-proxy/0.log" Dec 05 20:08:04 crc kubenswrapper[4828]: I1205 20:08:04.968896 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vbgcx_e5365032-f31f-4e90-bb94-193e5d6dcc9f/machine-api-operator/0.log" Dec 05 20:08:07 crc kubenswrapper[4828]: I1205 20:08:07.447328 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:08:07 crc kubenswrapper[4828]: E1205 20:08:07.448124 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:08:14 crc kubenswrapper[4828]: I1205 20:08:14.448885 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:08:14 crc kubenswrapper[4828]: E1205 20:08:14.449846 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:08:17 crc kubenswrapper[4828]: I1205 20:08:17.040259 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-7srhj_3d6b347b-b532-43b5-b0d4-8c40b7962156/cert-manager-controller/0.log" Dec 05 20:08:17 crc kubenswrapper[4828]: I1205 20:08:17.103175 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-49xx5_c8cac37c-e093-48f2-b1da-eaf62bf95bfd/cert-manager-cainjector/0.log" Dec 05 20:08:17 crc kubenswrapper[4828]: I1205 20:08:17.180941 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-n596v_2fcb84c0-5e6a-45ee-9c06-f0a12a1ef15b/cert-manager-webhook/0.log" Dec 05 20:08:18 crc kubenswrapper[4828]: I1205 20:08:18.446373 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:08:18 crc kubenswrapper[4828]: E1205 20:08:18.446935 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:08:27 crc kubenswrapper[4828]: I1205 20:08:27.447454 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:08:27 crc kubenswrapper[4828]: E1205 20:08:27.449228 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:08:29 crc kubenswrapper[4828]: I1205 20:08:29.189903 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-hbrt4_96df2436-0a55-4b21-900b-dfedbafa290d/nmstate-console-plugin/0.log" Dec 05 20:08:29 crc kubenswrapper[4828]: I1205 20:08:29.361089 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lmln5_7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8/nmstate-handler/0.log" Dec 05 20:08:29 crc kubenswrapper[4828]: I1205 20:08:29.373903 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-jzb4h_5e07f179-6cb8-4771-894c-7ad6c2ee6b10/kube-rbac-proxy/0.log" Dec 05 20:08:29 crc kubenswrapper[4828]: I1205 20:08:29.472036 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-jzb4h_5e07f179-6cb8-4771-894c-7ad6c2ee6b10/nmstate-metrics/0.log" Dec 05 20:08:29 crc kubenswrapper[4828]: I1205 20:08:29.555667 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-2zdc5_1691d52d-868b-4121-8863-2a59db739b1b/nmstate-operator/0.log" Dec 05 20:08:29 crc kubenswrapper[4828]: I1205 20:08:29.665733 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-j5ps8_c535354b-ac85-4a30-9f7d-1547f2db8fbc/nmstate-webhook/0.log" Dec 05 20:08:31 crc kubenswrapper[4828]: I1205 20:08:31.446908 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:08:31 crc kubenswrapper[4828]: E1205 20:08:31.447374 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:08:35 crc kubenswrapper[4828]: I1205 20:08:35.025563 4828 scope.go:117] "RemoveContainer" containerID="d251e6b2c784d25caf6570e8444f912603011b0363c929ff83bb1d3d74847ee4" Dec 05 20:08:35 crc kubenswrapper[4828]: I1205 20:08:35.069685 4828 scope.go:117] "RemoveContainer" containerID="708842c699d5c61957a8989263433da42a1e079f5e2d26415377ae1c152017ed" Dec 05 20:08:35 crc kubenswrapper[4828]: I1205 20:08:35.098737 4828 scope.go:117] "RemoveContainer" containerID="d2e809e152b2196e15f7b7fc12df9bf53e48b473b1d8614bf4335d07ed6a0846" Dec 05 20:08:41 crc kubenswrapper[4828]: I1205 20:08:41.446598 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:08:41 crc kubenswrapper[4828]: E1205 20:08:41.447429 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:08:43 crc kubenswrapper[4828]: I1205 20:08:43.446471 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:08:43 crc kubenswrapper[4828]: E1205 20:08:43.447097 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.019017 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-t5wnc_35cb1f63-dbf8-4451-adff-4b35840e5498/controller/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.059200 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-t5wnc_35cb1f63-dbf8-4451-adff-4b35840e5498/kube-rbac-proxy/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.207778 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-frr-files/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.355235 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-frr-files/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.371257 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-reloader/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.377955 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-metrics/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.392073 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-reloader/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.541947 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-frr-files/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.542070 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-metrics/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.574920 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-reloader/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.591247 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-metrics/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.743255 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-frr-files/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.761673 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-metrics/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.762030 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-reloader/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.796364 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/controller/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.933228 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/kube-rbac-proxy/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.946928 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/frr-metrics/0.log" Dec 05 20:08:45 crc kubenswrapper[4828]: I1205 20:08:45.988566 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/kube-rbac-proxy-frr/0.log" Dec 05 20:08:46 crc kubenswrapper[4828]: I1205 20:08:46.184989 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/reloader/0.log" Dec 05 20:08:46 crc kubenswrapper[4828]: I1205 
20:08:46.226384 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-w8vp2_67e4c769-0905-4c8f-8fc0-2488346fe188/frr-k8s-webhook-server/0.log" Dec 05 20:08:46 crc kubenswrapper[4828]: I1205 20:08:46.508045 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-cccfd6bcb-v7d55_b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2/manager/0.log" Dec 05 20:08:46 crc kubenswrapper[4828]: I1205 20:08:46.620781 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8f476869-jcl7n_88807e96-2cf3-4ab4-863d-48538fac8bc8/webhook-server/0.log" Dec 05 20:08:46 crc kubenswrapper[4828]: I1205 20:08:46.751681 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gqq4l_8a121072-6f44-4a42-b9b1-a54d8d04fea4/kube-rbac-proxy/0.log" Dec 05 20:08:47 crc kubenswrapper[4828]: I1205 20:08:47.285040 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gqq4l_8a121072-6f44-4a42-b9b1-a54d8d04fea4/speaker/0.log" Dec 05 20:08:47 crc kubenswrapper[4828]: I1205 20:08:47.317600 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/frr/0.log" Dec 05 20:08:53 crc kubenswrapper[4828]: I1205 20:08:53.446759 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:08:53 crc kubenswrapper[4828]: E1205 20:08:53.447665 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:08:55 crc kubenswrapper[4828]: I1205 20:08:55.446662 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:08:55 crc kubenswrapper[4828]: E1205 20:08:55.447411 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:08:58 crc kubenswrapper[4828]: I1205 20:08:58.694856 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/util/0.log" Dec 05 20:08:58 crc kubenswrapper[4828]: I1205 20:08:58.855983 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/pull/0.log" Dec 05 20:08:58 crc kubenswrapper[4828]: I1205 20:08:58.858860 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/util/0.log" Dec 05 20:08:58 crc kubenswrapper[4828]: I1205 20:08:58.907663 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/pull/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.048256 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/pull/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.086343 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/util/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.088640 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/extract/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.222298 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/util/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.395417 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/pull/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.434057 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/pull/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.440277 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/util/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.601306 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/util/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.602430 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/pull/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.604615 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/extract/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.755734 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/extract-utilities/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.946779 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/extract-content/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.978905 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/extract-content/0.log" Dec 05 20:08:59 crc kubenswrapper[4828]: I1205 20:08:59.979072 
4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/extract-utilities/0.log" Dec 05 20:09:00 crc kubenswrapper[4828]: I1205 20:09:00.137918 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/extract-utilities/0.log" Dec 05 20:09:00 crc kubenswrapper[4828]: I1205 20:09:00.148405 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/extract-content/0.log" Dec 05 20:09:00 crc kubenswrapper[4828]: I1205 20:09:00.330614 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/extract-utilities/0.log" Dec 05 20:09:00 crc kubenswrapper[4828]: I1205 20:09:00.526234 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/extract-content/0.log" Dec 05 20:09:00 crc kubenswrapper[4828]: I1205 20:09:00.577105 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/extract-utilities/0.log" Dec 05 20:09:00 crc kubenswrapper[4828]: I1205 20:09:00.579909 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/extract-content/0.log" Dec 05 20:09:00 crc kubenswrapper[4828]: I1205 20:09:00.707904 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/registry-server/0.log" Dec 05 20:09:00 crc kubenswrapper[4828]: I1205 20:09:00.782999 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/extract-utilities/0.log" Dec 05 20:09:00 crc kubenswrapper[4828]: I1205 20:09:00.850060 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/extract-content/0.log" Dec 05 20:09:01 crc kubenswrapper[4828]: I1205 20:09:01.031393 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-9dx6f_57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd/marketplace-operator/0.log" Dec 05 20:09:01 crc kubenswrapper[4828]: I1205 20:09:01.139921 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/extract-utilities/0.log" Dec 05 20:09:01 crc kubenswrapper[4828]: I1205 20:09:01.349457 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/extract-content/0.log" Dec 05 20:09:01 crc kubenswrapper[4828]: I1205 20:09:01.417811 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/extract-content/0.log" Dec 05 20:09:01 crc kubenswrapper[4828]: I1205 20:09:01.424627 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/extract-utilities/0.log" Dec 05 20:09:01 crc kubenswrapper[4828]: I1205 20:09:01.612811 
4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/extract-content/0.log" Dec 05 20:09:01 crc kubenswrapper[4828]: I1205 20:09:01.626891 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/extract-utilities/0.log" Dec 05 20:09:01 crc kubenswrapper[4828]: I1205 20:09:01.646092 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/registry-server/0.log" Dec 05 20:09:01 crc kubenswrapper[4828]: I1205 20:09:01.818330 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/extract-utilities/0.log" Dec 05 20:09:01 crc kubenswrapper[4828]: I1205 20:09:01.845283 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/registry-server/0.log" Dec 05 20:09:01 crc kubenswrapper[4828]: I1205 20:09:01.993615 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/extract-utilities/0.log" Dec 05 20:09:02 crc kubenswrapper[4828]: I1205 20:09:02.023917 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/extract-content/0.log" Dec 05 20:09:02 crc kubenswrapper[4828]: I1205 20:09:02.045687 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/extract-content/0.log" Dec 05 20:09:02 crc kubenswrapper[4828]: I1205 20:09:02.251970 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/extract-utilities/0.log" Dec 05 20:09:02 crc kubenswrapper[4828]: I1205 20:09:02.311633 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/extract-content/0.log" Dec 05 20:09:03 crc kubenswrapper[4828]: I1205 20:09:03.604815 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/registry-server/0.log" Dec 05 20:09:06 crc kubenswrapper[4828]: I1205 20:09:06.448064 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:09:06 crc kubenswrapper[4828]: E1205 20:09:06.448714 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:09:10 crc kubenswrapper[4828]: I1205 20:09:10.456772 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:09:11 crc kubenswrapper[4828]: I1205 20:09:11.604109 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" 
event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"d68143ac91ffa442f6544d88d704ba189350bba12267b53e7e8314035ce28693"} Dec 05 20:09:17 crc kubenswrapper[4828]: I1205 20:09:17.446852 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:09:17 crc kubenswrapper[4828]: E1205 20:09:17.447652 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:09:28 crc kubenswrapper[4828]: I1205 20:09:28.446107 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:09:28 crc kubenswrapper[4828]: E1205 20:09:28.446942 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:09:42 crc kubenswrapper[4828]: I1205 20:09:42.477211 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:09:42 crc kubenswrapper[4828]: E1205 20:09:42.479144 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:09:53 crc kubenswrapper[4828]: I1205 20:09:53.447386 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:09:53 crc kubenswrapper[4828]: E1205 20:09:53.448183 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:10:06 crc kubenswrapper[4828]: I1205 20:10:06.450572 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:10:06 crc kubenswrapper[4828]: E1205 20:10:06.451294 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:10:20 crc kubenswrapper[4828]: I1205 20:10:20.447471 4828 scope.go:117] 
"RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:10:20 crc kubenswrapper[4828]: E1205 20:10:20.448206 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:10:35 crc kubenswrapper[4828]: I1205 20:10:35.448927 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:10:35 crc kubenswrapper[4828]: E1205 20:10:35.450068 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:10:44 crc kubenswrapper[4828]: I1205 20:10:44.510957 4828 generic.go:334] "Generic (PLEG): container finished" podID="bd62c8e0-605f-4e05-89a2-042ba85ba53d" containerID="6af36ae01c9a75df0559af7f0d56ac6eb776a1a0e61db8773d285760a4f21c6c" exitCode=0 Dec 05 20:10:44 crc kubenswrapper[4828]: I1205 20:10:44.511074 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4mfks/must-gather-tdlzv" event={"ID":"bd62c8e0-605f-4e05-89a2-042ba85ba53d","Type":"ContainerDied","Data":"6af36ae01c9a75df0559af7f0d56ac6eb776a1a0e61db8773d285760a4f21c6c"} Dec 05 20:10:44 crc kubenswrapper[4828]: I1205 20:10:44.512393 4828 scope.go:117] "RemoveContainer" containerID="6af36ae01c9a75df0559af7f0d56ac6eb776a1a0e61db8773d285760a4f21c6c" Dec 05 20:10:45 crc kubenswrapper[4828]: I1205 20:10:45.040922 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4mfks_must-gather-tdlzv_bd62c8e0-605f-4e05-89a2-042ba85ba53d/gather/0.log" Dec 05 20:10:46 crc kubenswrapper[4828]: I1205 20:10:46.446642 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:10:46 crc kubenswrapper[4828]: E1205 20:10:46.447108 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.204619 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4mfks/must-gather-tdlzv"] Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.205259 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4mfks/must-gather-tdlzv"] Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.205472 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4mfks/must-gather-tdlzv" podUID="bd62c8e0-605f-4e05-89a2-042ba85ba53d" containerName="copy" 
containerID="cri-o://8284089aebfb9042cf6aae51a8f81b800e28d2df860d04d0cb5651e3ae200c13" gracePeriod=2 Dec 05 20:10:52 crc kubenswrapper[4828]: E1205 20:10:52.281866 4828 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd62c8e0_605f_4e05_89a2_042ba85ba53d.slice/crio-conmon-8284089aebfb9042cf6aae51a8f81b800e28d2df860d04d0cb5651e3ae200c13.scope\": RecentStats: unable to find data in memory cache]" Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.616205 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4mfks_must-gather-tdlzv_bd62c8e0-605f-4e05-89a2-042ba85ba53d/copy/0.log" Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.617034 4828 generic.go:334] "Generic (PLEG): container finished" podID="bd62c8e0-605f-4e05-89a2-042ba85ba53d" containerID="8284089aebfb9042cf6aae51a8f81b800e28d2df860d04d0cb5651e3ae200c13" exitCode=143 Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.617114 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a23e46b9a0d1653fe4e01e9d9d59e85d1204a2fb1fb511400c09a3a5f78f42d" Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.702043 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4mfks_must-gather-tdlzv_bd62c8e0-605f-4e05-89a2-042ba85ba53d/copy/0.log" Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.702650 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/must-gather-tdlzv" Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.780924 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhgns\" (UniqueName: \"kubernetes.io/projected/bd62c8e0-605f-4e05-89a2-042ba85ba53d-kube-api-access-nhgns\") pod \"bd62c8e0-605f-4e05-89a2-042ba85ba53d\" (UID: \"bd62c8e0-605f-4e05-89a2-042ba85ba53d\") " Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.781201 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd62c8e0-605f-4e05-89a2-042ba85ba53d-must-gather-output\") pod \"bd62c8e0-605f-4e05-89a2-042ba85ba53d\" (UID: \"bd62c8e0-605f-4e05-89a2-042ba85ba53d\") " Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.787340 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd62c8e0-605f-4e05-89a2-042ba85ba53d-kube-api-access-nhgns" (OuterVolumeSpecName: "kube-api-access-nhgns") pod "bd62c8e0-605f-4e05-89a2-042ba85ba53d" (UID: "bd62c8e0-605f-4e05-89a2-042ba85ba53d"). InnerVolumeSpecName "kube-api-access-nhgns". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.883801 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhgns\" (UniqueName: \"kubernetes.io/projected/bd62c8e0-605f-4e05-89a2-042ba85ba53d-kube-api-access-nhgns\") on node \"crc\" DevicePath \"\"" Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.925137 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd62c8e0-605f-4e05-89a2-042ba85ba53d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bd62c8e0-605f-4e05-89a2-042ba85ba53d" (UID: "bd62c8e0-605f-4e05-89a2-042ba85ba53d"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:10:52 crc kubenswrapper[4828]: I1205 20:10:52.985729 4828 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd62c8e0-605f-4e05-89a2-042ba85ba53d-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 05 20:10:53 crc kubenswrapper[4828]: I1205 20:10:53.650225 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4mfks/must-gather-tdlzv" Dec 05 20:10:54 crc kubenswrapper[4828]: I1205 20:10:54.456702 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd62c8e0-605f-4e05-89a2-042ba85ba53d" path="/var/lib/kubelet/pods/bd62c8e0-605f-4e05-89a2-042ba85ba53d/volumes" Dec 05 20:10:58 crc kubenswrapper[4828]: I1205 20:10:58.446663 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:10:58 crc kubenswrapper[4828]: E1205 20:10:58.447915 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:11:10 crc kubenswrapper[4828]: I1205 20:11:10.447148 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:11:10 crc kubenswrapper[4828]: E1205 20:11:10.447887 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:11:24 crc kubenswrapper[4828]: I1205 20:11:24.446726 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:11:24 crc kubenswrapper[4828]: E1205 20:11:24.447769 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:11:35 crc kubenswrapper[4828]: I1205 20:11:35.259997 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 20:11:35 crc kubenswrapper[4828]: I1205 20:11:35.260454 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 20:11:35 crc kubenswrapper[4828]: 
I1205 20:11:35.447337 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:11:35 crc kubenswrapper[4828]: E1205 20:11:35.447911 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:11:47 crc kubenswrapper[4828]: I1205 20:11:47.448372 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:11:47 crc kubenswrapper[4828]: E1205 20:11:47.449357 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.313728 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4r6xv"] Dec 05 20:12:00 crc kubenswrapper[4828]: E1205 20:12:00.314790 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62c8e0-605f-4e05-89a2-042ba85ba53d" containerName="copy" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.314808 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62c8e0-605f-4e05-89a2-042ba85ba53d" containerName="copy" Dec 05 20:12:00 crc kubenswrapper[4828]: E1205 20:12:00.314845 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859af17e-b617-46f1-96a9-e819f88c632f" containerName="extract-utilities" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.314854 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="859af17e-b617-46f1-96a9-e819f88c632f" containerName="extract-utilities" Dec 05 20:12:00 crc kubenswrapper[4828]: E1205 20:12:00.314885 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859af17e-b617-46f1-96a9-e819f88c632f" containerName="extract-content" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.314893 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="859af17e-b617-46f1-96a9-e819f88c632f" containerName="extract-content" Dec 05 20:12:00 crc kubenswrapper[4828]: E1205 20:12:00.314913 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62c8e0-605f-4e05-89a2-042ba85ba53d" containerName="gather" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.314922 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62c8e0-605f-4e05-89a2-042ba85ba53d" containerName="gather" Dec 05 20:12:00 crc kubenswrapper[4828]: E1205 20:12:00.314949 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859af17e-b617-46f1-96a9-e819f88c632f" containerName="registry-server" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.314957 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="859af17e-b617-46f1-96a9-e819f88c632f" containerName="registry-server" Dec 05 20:12:00 crc kubenswrapper[4828]: E1205 20:12:00.314978 4828 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1583d0bb-137e-4198-9cb3-2ff53e72cad5" containerName="container-00" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.314987 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="1583d0bb-137e-4198-9cb3-2ff53e72cad5" containerName="container-00" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.315237 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="859af17e-b617-46f1-96a9-e819f88c632f" containerName="registry-server" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.315284 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="1583d0bb-137e-4198-9cb3-2ff53e72cad5" containerName="container-00" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.315295 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62c8e0-605f-4e05-89a2-042ba85ba53d" containerName="gather" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.315319 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62c8e0-605f-4e05-89a2-042ba85ba53d" containerName="copy" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.317244 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.325397 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4r6xv"] Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.410325 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-utilities\") pod \"certified-operators-4r6xv\" (UID: \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\") " pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.410383 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbg9h\" (UniqueName: \"kubernetes.io/projected/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-kube-api-access-sbg9h\") pod \"certified-operators-4r6xv\" (UID: \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\") " pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.410619 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-catalog-content\") pod \"certified-operators-4r6xv\" (UID: \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\") " pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.512009 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-utilities\") pod \"certified-operators-4r6xv\" (UID: \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\") " pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.512060 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbg9h\" (UniqueName: \"kubernetes.io/projected/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-kube-api-access-sbg9h\") pod \"certified-operators-4r6xv\" (UID: \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\") " pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.512121 4828 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-catalog-content\") pod \"certified-operators-4r6xv\" (UID: \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\") " pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.512644 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-catalog-content\") pod \"certified-operators-4r6xv\" (UID: \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\") " pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.512774 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-utilities\") pod \"certified-operators-4r6xv\" (UID: \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\") " pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.538210 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbg9h\" (UniqueName: \"kubernetes.io/projected/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-kube-api-access-sbg9h\") pod \"certified-operators-4r6xv\" (UID: \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\") " pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:00 crc kubenswrapper[4828]: I1205 20:12:00.641962 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:01 crc kubenswrapper[4828]: I1205 20:12:01.134176 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4r6xv"] Dec 05 20:12:02 crc kubenswrapper[4828]: I1205 20:12:02.406264 4828 generic.go:334] "Generic (PLEG): container finished" podID="a7bf04e8-367d-441e-a43e-c9db6e93e4c3" containerID="594970ef938ba2d00c0a5455f67a9c3d2b3f8dca5683ab3ea6abdc925fd96ed2" exitCode=0 Dec 05 20:12:02 crc kubenswrapper[4828]: I1205 20:12:02.406318 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4r6xv" event={"ID":"a7bf04e8-367d-441e-a43e-c9db6e93e4c3","Type":"ContainerDied","Data":"594970ef938ba2d00c0a5455f67a9c3d2b3f8dca5683ab3ea6abdc925fd96ed2"} Dec 05 20:12:02 crc kubenswrapper[4828]: I1205 20:12:02.406756 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4r6xv" event={"ID":"a7bf04e8-367d-441e-a43e-c9db6e93e4c3","Type":"ContainerStarted","Data":"ae91e55707796fd45ba0378f39fde13c8921b27f7d69a1a297c823098eceb32c"} Dec 05 20:12:02 crc kubenswrapper[4828]: I1205 20:12:02.411396 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 20:12:02 crc kubenswrapper[4828]: I1205 20:12:02.452123 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:12:02 crc kubenswrapper[4828]: E1205 20:12:02.452391 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" 
podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:12:04 crc kubenswrapper[4828]: I1205 20:12:04.422796 4828 generic.go:334] "Generic (PLEG): container finished" podID="a7bf04e8-367d-441e-a43e-c9db6e93e4c3" containerID="488d2e050bdd14654a113ad576c7208d5160c4c7a5e85ea2bb041fa4a0d850f5" exitCode=0 Dec 05 20:12:04 crc kubenswrapper[4828]: I1205 20:12:04.422878 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4r6xv" event={"ID":"a7bf04e8-367d-441e-a43e-c9db6e93e4c3","Type":"ContainerDied","Data":"488d2e050bdd14654a113ad576c7208d5160c4c7a5e85ea2bb041fa4a0d850f5"} Dec 05 20:12:05 crc kubenswrapper[4828]: I1205 20:12:05.260119 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 20:12:05 crc kubenswrapper[4828]: I1205 20:12:05.260488 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 20:12:05 crc kubenswrapper[4828]: I1205 20:12:05.436756 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4r6xv" event={"ID":"a7bf04e8-367d-441e-a43e-c9db6e93e4c3","Type":"ContainerStarted","Data":"5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08"} Dec 05 20:12:05 crc kubenswrapper[4828]: I1205 20:12:05.461973 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4r6xv" podStartSLOduration=2.9598766039999997 podStartE2EDuration="5.461947311s" podCreationTimestamp="2025-12-05 20:12:00 +0000 UTC" firstStartedPulling="2025-12-05 20:12:02.411155888 +0000 UTC m=+4100.306378194" lastFinishedPulling="2025-12-05 20:12:04.913226595 +0000 UTC m=+4102.808448901" observedRunningTime="2025-12-05 20:12:05.453986086 +0000 UTC m=+4103.349208442" watchObservedRunningTime="2025-12-05 20:12:05.461947311 +0000 UTC m=+4103.357169627" Dec 05 20:12:10 crc kubenswrapper[4828]: I1205 20:12:10.643059 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:10 crc kubenswrapper[4828]: I1205 20:12:10.643540 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:10 crc kubenswrapper[4828]: I1205 20:12:10.690938 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:11 crc kubenswrapper[4828]: I1205 20:12:11.582896 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:11 crc kubenswrapper[4828]: I1205 20:12:11.643680 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4r6xv"] Dec 05 20:12:13 crc kubenswrapper[4828]: I1205 20:12:13.535966 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4r6xv" podUID="a7bf04e8-367d-441e-a43e-c9db6e93e4c3" containerName="registry-server" 
containerID="cri-o://5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08" gracePeriod=2 Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.132776 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.291081 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbg9h\" (UniqueName: \"kubernetes.io/projected/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-kube-api-access-sbg9h\") pod \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\" (UID: \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\") " Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.291913 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-catalog-content\") pod \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\" (UID: \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\") " Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.292067 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-utilities\") pod \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\" (UID: \"a7bf04e8-367d-441e-a43e-c9db6e93e4c3\") " Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.295193 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-utilities" (OuterVolumeSpecName: "utilities") pod "a7bf04e8-367d-441e-a43e-c9db6e93e4c3" (UID: "a7bf04e8-367d-441e-a43e-c9db6e93e4c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.298840 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-kube-api-access-sbg9h" (OuterVolumeSpecName: "kube-api-access-sbg9h") pod "a7bf04e8-367d-441e-a43e-c9db6e93e4c3" (UID: "a7bf04e8-367d-441e-a43e-c9db6e93e4c3"). InnerVolumeSpecName "kube-api-access-sbg9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.350251 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7bf04e8-367d-441e-a43e-c9db6e93e4c3" (UID: "a7bf04e8-367d-441e-a43e-c9db6e93e4c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.397203 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbg9h\" (UniqueName: \"kubernetes.io/projected/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-kube-api-access-sbg9h\") on node \"crc\" DevicePath \"\"" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.397253 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.397263 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7bf04e8-367d-441e-a43e-c9db6e93e4c3-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.563524 4828 generic.go:334] "Generic (PLEG): container finished" podID="a7bf04e8-367d-441e-a43e-c9db6e93e4c3" containerID="5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08" exitCode=0 Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.563574 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4r6xv" event={"ID":"a7bf04e8-367d-441e-a43e-c9db6e93e4c3","Type":"ContainerDied","Data":"5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08"} Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.563602 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4r6xv" event={"ID":"a7bf04e8-367d-441e-a43e-c9db6e93e4c3","Type":"ContainerDied","Data":"ae91e55707796fd45ba0378f39fde13c8921b27f7d69a1a297c823098eceb32c"} Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.563620 4828 scope.go:117] "RemoveContainer" containerID="5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.563750 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4r6xv" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.591584 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4r6xv"] Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.596388 4828 scope.go:117] "RemoveContainer" containerID="488d2e050bdd14654a113ad576c7208d5160c4c7a5e85ea2bb041fa4a0d850f5" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.613336 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4r6xv"] Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.628248 4828 scope.go:117] "RemoveContainer" containerID="594970ef938ba2d00c0a5455f67a9c3d2b3f8dca5683ab3ea6abdc925fd96ed2" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.664532 4828 scope.go:117] "RemoveContainer" containerID="5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08" Dec 05 20:12:14 crc kubenswrapper[4828]: E1205 20:12:14.665073 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08\": container with ID starting with 5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08 not found: ID does not exist" containerID="5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.665119 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08"} err="failed to get container status \"5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08\": rpc error: code = NotFound desc = could not find container \"5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08\": container with ID starting with 5e1160bdb6d179b4e1ccdedf1aaca47b1eda9e2c2c0b8040563733c66f80ed08 not found: ID does not exist" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.665147 4828 scope.go:117] "RemoveContainer" containerID="488d2e050bdd14654a113ad576c7208d5160c4c7a5e85ea2bb041fa4a0d850f5" Dec 05 20:12:14 crc kubenswrapper[4828]: E1205 20:12:14.665566 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"488d2e050bdd14654a113ad576c7208d5160c4c7a5e85ea2bb041fa4a0d850f5\": container with ID starting with 488d2e050bdd14654a113ad576c7208d5160c4c7a5e85ea2bb041fa4a0d850f5 not found: ID does not exist" containerID="488d2e050bdd14654a113ad576c7208d5160c4c7a5e85ea2bb041fa4a0d850f5" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.665599 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"488d2e050bdd14654a113ad576c7208d5160c4c7a5e85ea2bb041fa4a0d850f5"} err="failed to get container status \"488d2e050bdd14654a113ad576c7208d5160c4c7a5e85ea2bb041fa4a0d850f5\": rpc error: code = NotFound desc = could not find container \"488d2e050bdd14654a113ad576c7208d5160c4c7a5e85ea2bb041fa4a0d850f5\": container with ID starting with 488d2e050bdd14654a113ad576c7208d5160c4c7a5e85ea2bb041fa4a0d850f5 not found: ID does not exist" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.665617 4828 scope.go:117] "RemoveContainer" containerID="594970ef938ba2d00c0a5455f67a9c3d2b3f8dca5683ab3ea6abdc925fd96ed2" Dec 05 20:12:14 crc kubenswrapper[4828]: E1205 20:12:14.666101 4828 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"594970ef938ba2d00c0a5455f67a9c3d2b3f8dca5683ab3ea6abdc925fd96ed2\": container with ID starting with 594970ef938ba2d00c0a5455f67a9c3d2b3f8dca5683ab3ea6abdc925fd96ed2 not found: ID does not exist" containerID="594970ef938ba2d00c0a5455f67a9c3d2b3f8dca5683ab3ea6abdc925fd96ed2" Dec 05 20:12:14 crc kubenswrapper[4828]: I1205 20:12:14.666135 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"594970ef938ba2d00c0a5455f67a9c3d2b3f8dca5683ab3ea6abdc925fd96ed2"} err="failed to get container status \"594970ef938ba2d00c0a5455f67a9c3d2b3f8dca5683ab3ea6abdc925fd96ed2\": rpc error: code = NotFound desc = could not find container \"594970ef938ba2d00c0a5455f67a9c3d2b3f8dca5683ab3ea6abdc925fd96ed2\": container with ID starting with 594970ef938ba2d00c0a5455f67a9c3d2b3f8dca5683ab3ea6abdc925fd96ed2 not found: ID does not exist" Dec 05 20:12:16 crc kubenswrapper[4828]: I1205 20:12:16.447350 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:12:16 crc kubenswrapper[4828]: E1205 20:12:16.447879 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:12:16 crc kubenswrapper[4828]: I1205 20:12:16.468227 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7bf04e8-367d-441e-a43e-c9db6e93e4c3" path="/var/lib/kubelet/pods/a7bf04e8-367d-441e-a43e-c9db6e93e4c3/volumes" Dec 05 20:12:28 crc kubenswrapper[4828]: I1205 20:12:28.447430 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:12:28 crc kubenswrapper[4828]: I1205 20:12:28.728705 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"} Dec 05 20:12:28 crc kubenswrapper[4828]: I1205 20:12:28.729911 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 20:12:35 crc kubenswrapper[4828]: I1205 20:12:35.129336 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 20:12:35 crc kubenswrapper[4828]: I1205 20:12:35.249721 4828 scope.go:117] "RemoveContainer" containerID="8284089aebfb9042cf6aae51a8f81b800e28d2df860d04d0cb5651e3ae200c13" Dec 05 20:12:35 crc kubenswrapper[4828]: I1205 20:12:35.259582 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 20:12:35 crc kubenswrapper[4828]: I1205 20:12:35.259896 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" 
podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 20:12:35 crc kubenswrapper[4828]: I1205 20:12:35.259935 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" Dec 05 20:12:35 crc kubenswrapper[4828]: I1205 20:12:35.260895 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d68143ac91ffa442f6544d88d704ba189350bba12267b53e7e8314035ce28693"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 20:12:35 crc kubenswrapper[4828]: I1205 20:12:35.260949 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://d68143ac91ffa442f6544d88d704ba189350bba12267b53e7e8314035ce28693" gracePeriod=600 Dec 05 20:12:35 crc kubenswrapper[4828]: I1205 20:12:35.277870 4828 scope.go:117] "RemoveContainer" containerID="6af36ae01c9a75df0559af7f0d56ac6eb776a1a0e61db8773d285760a4f21c6c" Dec 05 20:12:35 crc kubenswrapper[4828]: I1205 20:12:35.815502 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="d68143ac91ffa442f6544d88d704ba189350bba12267b53e7e8314035ce28693" exitCode=0 Dec 05 20:12:35 crc kubenswrapper[4828]: I1205 20:12:35.815589 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"d68143ac91ffa442f6544d88d704ba189350bba12267b53e7e8314035ce28693"} Dec 05 20:12:35 crc kubenswrapper[4828]: I1205 20:12:35.815937 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"} Dec 05 20:12:35 crc kubenswrapper[4828]: I1205 20:12:35.815980 4828 scope.go:117] "RemoveContainer" containerID="da6cba4d17ed1a9ad4e24e0406b8358b98cd50e0a707710e4db622166b616e5f" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.048712 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wgdgv/must-gather-r494b"] Dec 05 20:13:47 crc kubenswrapper[4828]: E1205 20:13:47.049666 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bf04e8-367d-441e-a43e-c9db6e93e4c3" containerName="extract-content" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.049683 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bf04e8-367d-441e-a43e-c9db6e93e4c3" containerName="extract-content" Dec 05 20:13:47 crc kubenswrapper[4828]: E1205 20:13:47.049709 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bf04e8-367d-441e-a43e-c9db6e93e4c3" containerName="registry-server" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.049717 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bf04e8-367d-441e-a43e-c9db6e93e4c3" containerName="registry-server" Dec 05 20:13:47 crc kubenswrapper[4828]: E1205 20:13:47.049745 
4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bf04e8-367d-441e-a43e-c9db6e93e4c3" containerName="extract-utilities" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.049753 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bf04e8-367d-441e-a43e-c9db6e93e4c3" containerName="extract-utilities" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.049957 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7bf04e8-367d-441e-a43e-c9db6e93e4c3" containerName="registry-server" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.051134 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wgdgv/must-gather-r494b" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.053892 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wgdgv"/"default-dockercfg-stcsl" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.054137 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wgdgv"/"openshift-service-ca.crt" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.066368 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wgdgv"/"kube-root-ca.crt" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.078161 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wgdgv/must-gather-r494b"] Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.156117 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6f76f688-de64-4b0a-b32d-2a56bfd43fd9-must-gather-output\") pod \"must-gather-r494b\" (UID: \"6f76f688-de64-4b0a-b32d-2a56bfd43fd9\") " pod="openshift-must-gather-wgdgv/must-gather-r494b" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.156216 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlsjc\" (UniqueName: \"kubernetes.io/projected/6f76f688-de64-4b0a-b32d-2a56bfd43fd9-kube-api-access-tlsjc\") pod \"must-gather-r494b\" (UID: \"6f76f688-de64-4b0a-b32d-2a56bfd43fd9\") " pod="openshift-must-gather-wgdgv/must-gather-r494b" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.258366 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6f76f688-de64-4b0a-b32d-2a56bfd43fd9-must-gather-output\") pod \"must-gather-r494b\" (UID: \"6f76f688-de64-4b0a-b32d-2a56bfd43fd9\") " pod="openshift-must-gather-wgdgv/must-gather-r494b" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.258426 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlsjc\" (UniqueName: \"kubernetes.io/projected/6f76f688-de64-4b0a-b32d-2a56bfd43fd9-kube-api-access-tlsjc\") pod \"must-gather-r494b\" (UID: \"6f76f688-de64-4b0a-b32d-2a56bfd43fd9\") " pod="openshift-must-gather-wgdgv/must-gather-r494b" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.258870 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6f76f688-de64-4b0a-b32d-2a56bfd43fd9-must-gather-output\") pod \"must-gather-r494b\" (UID: \"6f76f688-de64-4b0a-b32d-2a56bfd43fd9\") " pod="openshift-must-gather-wgdgv/must-gather-r494b" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.276621 4828 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlsjc\" (UniqueName: \"kubernetes.io/projected/6f76f688-de64-4b0a-b32d-2a56bfd43fd9-kube-api-access-tlsjc\") pod \"must-gather-r494b\" (UID: \"6f76f688-de64-4b0a-b32d-2a56bfd43fd9\") " pod="openshift-must-gather-wgdgv/must-gather-r494b" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.379388 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wgdgv/must-gather-r494b" Dec 05 20:13:47 crc kubenswrapper[4828]: I1205 20:13:47.886534 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wgdgv/must-gather-r494b"] Dec 05 20:13:48 crc kubenswrapper[4828]: I1205 20:13:48.612132 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wgdgv/must-gather-r494b" event={"ID":"6f76f688-de64-4b0a-b32d-2a56bfd43fd9","Type":"ContainerStarted","Data":"9a09ce9525d53eda4a7b3d2ffc66379f3b2897eeda84dbd6d6780cff1a8c4bc5"} Dec 05 20:13:48 crc kubenswrapper[4828]: I1205 20:13:48.612436 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wgdgv/must-gather-r494b" event={"ID":"6f76f688-de64-4b0a-b32d-2a56bfd43fd9","Type":"ContainerStarted","Data":"20b3891944e57cde92a7931a9d6a5d114121fa973a788188dde02fb1107055c3"} Dec 05 20:13:48 crc kubenswrapper[4828]: I1205 20:13:48.612450 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wgdgv/must-gather-r494b" event={"ID":"6f76f688-de64-4b0a-b32d-2a56bfd43fd9","Type":"ContainerStarted","Data":"5d55920f48039e188443dc834e8683e89d19e2cd55c662136710426508438b6d"} Dec 05 20:13:48 crc kubenswrapper[4828]: I1205 20:13:48.630462 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wgdgv/must-gather-r494b" podStartSLOduration=1.630443596 podStartE2EDuration="1.630443596s" podCreationTimestamp="2025-12-05 20:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 20:13:48.626064428 +0000 UTC m=+4206.521286754" watchObservedRunningTime="2025-12-05 20:13:48.630443596 +0000 UTC m=+4206.525665902" Dec 05 20:13:51 crc kubenswrapper[4828]: I1205 20:13:51.829301 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wgdgv/crc-debug-vhlmt"] Dec 05 20:13:51 crc kubenswrapper[4828]: I1205 20:13:51.830903 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" Dec 05 20:13:51 crc kubenswrapper[4828]: I1205 20:13:51.949543 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzw5m\" (UniqueName: \"kubernetes.io/projected/ee402526-ceea-42c8-a7ae-a0e4fa72e83d-kube-api-access-mzw5m\") pod \"crc-debug-vhlmt\" (UID: \"ee402526-ceea-42c8-a7ae-a0e4fa72e83d\") " pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" Dec 05 20:13:51 crc kubenswrapper[4828]: I1205 20:13:51.950174 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee402526-ceea-42c8-a7ae-a0e4fa72e83d-host\") pod \"crc-debug-vhlmt\" (UID: \"ee402526-ceea-42c8-a7ae-a0e4fa72e83d\") " pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" Dec 05 20:13:52 crc kubenswrapper[4828]: I1205 20:13:52.052388 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee402526-ceea-42c8-a7ae-a0e4fa72e83d-host\") pod \"crc-debug-vhlmt\" (UID: \"ee402526-ceea-42c8-a7ae-a0e4fa72e83d\") " pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" Dec 05 20:13:52 crc kubenswrapper[4828]: I1205 20:13:52.052678 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee402526-ceea-42c8-a7ae-a0e4fa72e83d-host\") pod \"crc-debug-vhlmt\" (UID: \"ee402526-ceea-42c8-a7ae-a0e4fa72e83d\") " pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" Dec 05 20:13:52 crc kubenswrapper[4828]: I1205 20:13:52.052696 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzw5m\" (UniqueName: \"kubernetes.io/projected/ee402526-ceea-42c8-a7ae-a0e4fa72e83d-kube-api-access-mzw5m\") pod \"crc-debug-vhlmt\" (UID: \"ee402526-ceea-42c8-a7ae-a0e4fa72e83d\") " pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" Dec 05 20:13:52 crc kubenswrapper[4828]: I1205 20:13:52.082627 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzw5m\" (UniqueName: \"kubernetes.io/projected/ee402526-ceea-42c8-a7ae-a0e4fa72e83d-kube-api-access-mzw5m\") pod \"crc-debug-vhlmt\" (UID: \"ee402526-ceea-42c8-a7ae-a0e4fa72e83d\") " pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" Dec 05 20:13:52 crc kubenswrapper[4828]: I1205 20:13:52.150600 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" Dec 05 20:13:52 crc kubenswrapper[4828]: W1205 20:13:52.231759 4828 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee402526_ceea_42c8_a7ae_a0e4fa72e83d.slice/crio-96cefb6077ea6f2d02c85b99cd6cf6b362a45072e321c354cefcc280b7a88d3a WatchSource:0}: Error finding container 96cefb6077ea6f2d02c85b99cd6cf6b362a45072e321c354cefcc280b7a88d3a: Status 404 returned error can't find the container with id 96cefb6077ea6f2d02c85b99cd6cf6b362a45072e321c354cefcc280b7a88d3a Dec 05 20:13:52 crc kubenswrapper[4828]: I1205 20:13:52.651163 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" event={"ID":"ee402526-ceea-42c8-a7ae-a0e4fa72e83d","Type":"ContainerStarted","Data":"8c0031e797a7b628a80dec7106ee796d13906e381ce59d60a11b316829235d9d"} Dec 05 20:13:52 crc kubenswrapper[4828]: I1205 20:13:52.651596 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" event={"ID":"ee402526-ceea-42c8-a7ae-a0e4fa72e83d","Type":"ContainerStarted","Data":"96cefb6077ea6f2d02c85b99cd6cf6b362a45072e321c354cefcc280b7a88d3a"} Dec 05 20:13:52 crc kubenswrapper[4828]: I1205 20:13:52.671054 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" podStartSLOduration=1.671033312 podStartE2EDuration="1.671033312s" podCreationTimestamp="2025-12-05 20:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 20:13:52.66650575 +0000 UTC m=+4210.561728066" watchObservedRunningTime="2025-12-05 20:13:52.671033312 +0000 UTC m=+4210.566255618" Dec 05 20:14:25 crc kubenswrapper[4828]: I1205 20:14:25.950916 4828 generic.go:334] "Generic (PLEG): container finished" podID="ee402526-ceea-42c8-a7ae-a0e4fa72e83d" containerID="8c0031e797a7b628a80dec7106ee796d13906e381ce59d60a11b316829235d9d" exitCode=0 Dec 05 20:14:25 crc kubenswrapper[4828]: I1205 20:14:25.951311 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" event={"ID":"ee402526-ceea-42c8-a7ae-a0e4fa72e83d","Type":"ContainerDied","Data":"8c0031e797a7b628a80dec7106ee796d13906e381ce59d60a11b316829235d9d"} Dec 05 20:14:27 crc kubenswrapper[4828]: I1205 20:14:27.070287 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" Dec 05 20:14:27 crc kubenswrapper[4828]: I1205 20:14:27.102380 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wgdgv/crc-debug-vhlmt"] Dec 05 20:14:27 crc kubenswrapper[4828]: I1205 20:14:27.110092 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wgdgv/crc-debug-vhlmt"] Dec 05 20:14:27 crc kubenswrapper[4828]: I1205 20:14:27.169516 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzw5m\" (UniqueName: \"kubernetes.io/projected/ee402526-ceea-42c8-a7ae-a0e4fa72e83d-kube-api-access-mzw5m\") pod \"ee402526-ceea-42c8-a7ae-a0e4fa72e83d\" (UID: \"ee402526-ceea-42c8-a7ae-a0e4fa72e83d\") " Dec 05 20:14:27 crc kubenswrapper[4828]: I1205 20:14:27.169646 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee402526-ceea-42c8-a7ae-a0e4fa72e83d-host\") pod \"ee402526-ceea-42c8-a7ae-a0e4fa72e83d\" (UID: \"ee402526-ceea-42c8-a7ae-a0e4fa72e83d\") " Dec 05 20:14:27 crc kubenswrapper[4828]: I1205 20:14:27.169782 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee402526-ceea-42c8-a7ae-a0e4fa72e83d-host" (OuterVolumeSpecName: "host") pod "ee402526-ceea-42c8-a7ae-a0e4fa72e83d" (UID: "ee402526-ceea-42c8-a7ae-a0e4fa72e83d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 20:14:27 crc kubenswrapper[4828]: I1205 20:14:27.170231 4828 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee402526-ceea-42c8-a7ae-a0e4fa72e83d-host\") on node \"crc\" DevicePath \"\"" Dec 05 20:14:27 crc kubenswrapper[4828]: I1205 20:14:27.175083 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee402526-ceea-42c8-a7ae-a0e4fa72e83d-kube-api-access-mzw5m" (OuterVolumeSpecName: "kube-api-access-mzw5m") pod "ee402526-ceea-42c8-a7ae-a0e4fa72e83d" (UID: "ee402526-ceea-42c8-a7ae-a0e4fa72e83d"). InnerVolumeSpecName "kube-api-access-mzw5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:14:27 crc kubenswrapper[4828]: I1205 20:14:27.273143 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzw5m\" (UniqueName: \"kubernetes.io/projected/ee402526-ceea-42c8-a7ae-a0e4fa72e83d-kube-api-access-mzw5m\") on node \"crc\" DevicePath \"\"" Dec 05 20:14:27 crc kubenswrapper[4828]: I1205 20:14:27.979038 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96cefb6077ea6f2d02c85b99cd6cf6b362a45072e321c354cefcc280b7a88d3a" Dec 05 20:14:27 crc kubenswrapper[4828]: I1205 20:14:27.979089 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wgdgv/crc-debug-vhlmt" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.280180 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wgdgv/crc-debug-zljg9"] Dec 05 20:14:28 crc kubenswrapper[4828]: E1205 20:14:28.280611 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee402526-ceea-42c8-a7ae-a0e4fa72e83d" containerName="container-00" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.280622 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee402526-ceea-42c8-a7ae-a0e4fa72e83d" containerName="container-00" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.280795 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee402526-ceea-42c8-a7ae-a0e4fa72e83d" containerName="container-00" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.281506 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wgdgv/crc-debug-zljg9" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.392036 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa667065-121d-4a4a-972b-7a8a311a9edb-host\") pod \"crc-debug-zljg9\" (UID: \"aa667065-121d-4a4a-972b-7a8a311a9edb\") " pod="openshift-must-gather-wgdgv/crc-debug-zljg9" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.392363 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdkhb\" (UniqueName: \"kubernetes.io/projected/aa667065-121d-4a4a-972b-7a8a311a9edb-kube-api-access-fdkhb\") pod \"crc-debug-zljg9\" (UID: \"aa667065-121d-4a4a-972b-7a8a311a9edb\") " pod="openshift-must-gather-wgdgv/crc-debug-zljg9" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.456563 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee402526-ceea-42c8-a7ae-a0e4fa72e83d" path="/var/lib/kubelet/pods/ee402526-ceea-42c8-a7ae-a0e4fa72e83d/volumes" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.494769 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdkhb\" (UniqueName: \"kubernetes.io/projected/aa667065-121d-4a4a-972b-7a8a311a9edb-kube-api-access-fdkhb\") pod \"crc-debug-zljg9\" (UID: \"aa667065-121d-4a4a-972b-7a8a311a9edb\") " pod="openshift-must-gather-wgdgv/crc-debug-zljg9" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.494932 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa667065-121d-4a4a-972b-7a8a311a9edb-host\") pod \"crc-debug-zljg9\" (UID: \"aa667065-121d-4a4a-972b-7a8a311a9edb\") " pod="openshift-must-gather-wgdgv/crc-debug-zljg9" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.495071 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa667065-121d-4a4a-972b-7a8a311a9edb-host\") pod \"crc-debug-zljg9\" (UID: \"aa667065-121d-4a4a-972b-7a8a311a9edb\") " pod="openshift-must-gather-wgdgv/crc-debug-zljg9" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.514851 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdkhb\" (UniqueName: \"kubernetes.io/projected/aa667065-121d-4a4a-972b-7a8a311a9edb-kube-api-access-fdkhb\") pod \"crc-debug-zljg9\" (UID: \"aa667065-121d-4a4a-972b-7a8a311a9edb\") " 
pod="openshift-must-gather-wgdgv/crc-debug-zljg9" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.598860 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wgdgv/crc-debug-zljg9" Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.990288 4828 generic.go:334] "Generic (PLEG): container finished" podID="aa667065-121d-4a4a-972b-7a8a311a9edb" containerID="3f7aa654c4a6aed63ac2c89298c55871ec5c6da782e1ac1214b7fc4403cea147" exitCode=0 Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.990364 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wgdgv/crc-debug-zljg9" event={"ID":"aa667065-121d-4a4a-972b-7a8a311a9edb","Type":"ContainerDied","Data":"3f7aa654c4a6aed63ac2c89298c55871ec5c6da782e1ac1214b7fc4403cea147"} Dec 05 20:14:28 crc kubenswrapper[4828]: I1205 20:14:28.990641 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wgdgv/crc-debug-zljg9" event={"ID":"aa667065-121d-4a4a-972b-7a8a311a9edb","Type":"ContainerStarted","Data":"54640204000a2a7b2ec170eba826bb65f74aac53253c66933e743c120ca065dd"} Dec 05 20:14:29 crc kubenswrapper[4828]: I1205 20:14:29.468348 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wgdgv/crc-debug-zljg9"] Dec 05 20:14:29 crc kubenswrapper[4828]: I1205 20:14:29.495272 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wgdgv/crc-debug-zljg9"] Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.093436 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wgdgv/crc-debug-zljg9" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.225985 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa667065-121d-4a4a-972b-7a8a311a9edb-host\") pod \"aa667065-121d-4a4a-972b-7a8a311a9edb\" (UID: \"aa667065-121d-4a4a-972b-7a8a311a9edb\") " Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.226397 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdkhb\" (UniqueName: \"kubernetes.io/projected/aa667065-121d-4a4a-972b-7a8a311a9edb-kube-api-access-fdkhb\") pod \"aa667065-121d-4a4a-972b-7a8a311a9edb\" (UID: \"aa667065-121d-4a4a-972b-7a8a311a9edb\") " Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.226121 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa667065-121d-4a4a-972b-7a8a311a9edb-host" (OuterVolumeSpecName: "host") pod "aa667065-121d-4a4a-972b-7a8a311a9edb" (UID: "aa667065-121d-4a4a-972b-7a8a311a9edb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.234561 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa667065-121d-4a4a-972b-7a8a311a9edb-kube-api-access-fdkhb" (OuterVolumeSpecName: "kube-api-access-fdkhb") pod "aa667065-121d-4a4a-972b-7a8a311a9edb" (UID: "aa667065-121d-4a4a-972b-7a8a311a9edb"). InnerVolumeSpecName "kube-api-access-fdkhb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.328483 4828 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa667065-121d-4a4a-972b-7a8a311a9edb-host\") on node \"crc\" DevicePath \"\"" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.328524 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdkhb\" (UniqueName: \"kubernetes.io/projected/aa667065-121d-4a4a-972b-7a8a311a9edb-kube-api-access-fdkhb\") on node \"crc\" DevicePath \"\"" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.456426 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa667065-121d-4a4a-972b-7a8a311a9edb" path="/var/lib/kubelet/pods/aa667065-121d-4a4a-972b-7a8a311a9edb/volumes" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.659403 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wgdgv/crc-debug-5nk9k"] Dec 05 20:14:30 crc kubenswrapper[4828]: E1205 20:14:30.659850 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa667065-121d-4a4a-972b-7a8a311a9edb" containerName="container-00" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.659866 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa667065-121d-4a4a-972b-7a8a311a9edb" containerName="container-00" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.660036 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa667065-121d-4a4a-972b-7a8a311a9edb" containerName="container-00" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.660637 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wgdgv/crc-debug-5nk9k" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.735769 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f29cx\" (UniqueName: \"kubernetes.io/projected/815acd85-7003-4c3d-b2c6-21bac99f762f-kube-api-access-f29cx\") pod \"crc-debug-5nk9k\" (UID: \"815acd85-7003-4c3d-b2c6-21bac99f762f\") " pod="openshift-must-gather-wgdgv/crc-debug-5nk9k" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.735845 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/815acd85-7003-4c3d-b2c6-21bac99f762f-host\") pod \"crc-debug-5nk9k\" (UID: \"815acd85-7003-4c3d-b2c6-21bac99f762f\") " pod="openshift-must-gather-wgdgv/crc-debug-5nk9k" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.837251 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f29cx\" (UniqueName: \"kubernetes.io/projected/815acd85-7003-4c3d-b2c6-21bac99f762f-kube-api-access-f29cx\") pod \"crc-debug-5nk9k\" (UID: \"815acd85-7003-4c3d-b2c6-21bac99f762f\") " pod="openshift-must-gather-wgdgv/crc-debug-5nk9k" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.837612 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/815acd85-7003-4c3d-b2c6-21bac99f762f-host\") pod \"crc-debug-5nk9k\" (UID: \"815acd85-7003-4c3d-b2c6-21bac99f762f\") " pod="openshift-must-gather-wgdgv/crc-debug-5nk9k" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.837708 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/815acd85-7003-4c3d-b2c6-21bac99f762f-host\") pod \"crc-debug-5nk9k\" (UID: \"815acd85-7003-4c3d-b2c6-21bac99f762f\") " pod="openshift-must-gather-wgdgv/crc-debug-5nk9k" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.858858 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f29cx\" (UniqueName: \"kubernetes.io/projected/815acd85-7003-4c3d-b2c6-21bac99f762f-kube-api-access-f29cx\") pod \"crc-debug-5nk9k\" (UID: \"815acd85-7003-4c3d-b2c6-21bac99f762f\") " pod="openshift-must-gather-wgdgv/crc-debug-5nk9k" Dec 05 20:14:30 crc kubenswrapper[4828]: I1205 20:14:30.979676 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wgdgv/crc-debug-5nk9k" Dec 05 20:14:31 crc kubenswrapper[4828]: I1205 20:14:31.010123 4828 scope.go:117] "RemoveContainer" containerID="3f7aa654c4a6aed63ac2c89298c55871ec5c6da782e1ac1214b7fc4403cea147" Dec 05 20:14:31 crc kubenswrapper[4828]: I1205 20:14:31.010310 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wgdgv/crc-debug-zljg9" Dec 05 20:14:32 crc kubenswrapper[4828]: I1205 20:14:32.022590 4828 generic.go:334] "Generic (PLEG): container finished" podID="815acd85-7003-4c3d-b2c6-21bac99f762f" containerID="1bfe87cd1bbfc90344026c2e7928ac9d4d5f17df23929a059d4b8191868744a2" exitCode=0 Dec 05 20:14:32 crc kubenswrapper[4828]: I1205 20:14:32.022681 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wgdgv/crc-debug-5nk9k" event={"ID":"815acd85-7003-4c3d-b2c6-21bac99f762f","Type":"ContainerDied","Data":"1bfe87cd1bbfc90344026c2e7928ac9d4d5f17df23929a059d4b8191868744a2"} Dec 05 20:14:32 crc kubenswrapper[4828]: I1205 20:14:32.023438 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wgdgv/crc-debug-5nk9k" event={"ID":"815acd85-7003-4c3d-b2c6-21bac99f762f","Type":"ContainerStarted","Data":"bf17bcd15534950c4965a896b4ea9586a41bb6bab60cf9b921be0694d5c0d688"} Dec 05 20:14:32 crc kubenswrapper[4828]: I1205 20:14:32.069782 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wgdgv/crc-debug-5nk9k"] Dec 05 20:14:32 crc kubenswrapper[4828]: I1205 20:14:32.085373 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wgdgv/crc-debug-5nk9k"] Dec 05 20:14:33 crc kubenswrapper[4828]: I1205 20:14:33.132287 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wgdgv/crc-debug-5nk9k" Dec 05 20:14:33 crc kubenswrapper[4828]: I1205 20:14:33.176620 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/815acd85-7003-4c3d-b2c6-21bac99f762f-host\") pod \"815acd85-7003-4c3d-b2c6-21bac99f762f\" (UID: \"815acd85-7003-4c3d-b2c6-21bac99f762f\") " Dec 05 20:14:33 crc kubenswrapper[4828]: I1205 20:14:33.176772 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f29cx\" (UniqueName: \"kubernetes.io/projected/815acd85-7003-4c3d-b2c6-21bac99f762f-kube-api-access-f29cx\") pod \"815acd85-7003-4c3d-b2c6-21bac99f762f\" (UID: \"815acd85-7003-4c3d-b2c6-21bac99f762f\") " Dec 05 20:14:33 crc kubenswrapper[4828]: I1205 20:14:33.176775 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815acd85-7003-4c3d-b2c6-21bac99f762f-host" (OuterVolumeSpecName: "host") pod "815acd85-7003-4c3d-b2c6-21bac99f762f" (UID: "815acd85-7003-4c3d-b2c6-21bac99f762f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 20:14:33 crc kubenswrapper[4828]: I1205 20:14:33.177281 4828 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/815acd85-7003-4c3d-b2c6-21bac99f762f-host\") on node \"crc\" DevicePath \"\"" Dec 05 20:14:33 crc kubenswrapper[4828]: I1205 20:14:33.184471 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815acd85-7003-4c3d-b2c6-21bac99f762f-kube-api-access-f29cx" (OuterVolumeSpecName: "kube-api-access-f29cx") pod "815acd85-7003-4c3d-b2c6-21bac99f762f" (UID: "815acd85-7003-4c3d-b2c6-21bac99f762f"). InnerVolumeSpecName "kube-api-access-f29cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:14:33 crc kubenswrapper[4828]: I1205 20:14:33.278841 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f29cx\" (UniqueName: \"kubernetes.io/projected/815acd85-7003-4c3d-b2c6-21bac99f762f-kube-api-access-f29cx\") on node \"crc\" DevicePath \"\"" Dec 05 20:14:34 crc kubenswrapper[4828]: I1205 20:14:34.043437 4828 scope.go:117] "RemoveContainer" containerID="1bfe87cd1bbfc90344026c2e7928ac9d4d5f17df23929a059d4b8191868744a2" Dec 05 20:14:34 crc kubenswrapper[4828]: I1205 20:14:34.043490 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wgdgv/crc-debug-5nk9k" Dec 05 20:14:34 crc kubenswrapper[4828]: I1205 20:14:34.455747 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815acd85-7003-4c3d-b2c6-21bac99f762f" path="/var/lib/kubelet/pods/815acd85-7003-4c3d-b2c6-21bac99f762f/volumes" Dec 05 20:14:35 crc kubenswrapper[4828]: I1205 20:14:35.260044 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 20:14:35 crc kubenswrapper[4828]: I1205 20:14:35.260132 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.195673 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2"] Dec 05 20:15:00 crc kubenswrapper[4828]: E1205 20:15:00.196551 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815acd85-7003-4c3d-b2c6-21bac99f762f" containerName="container-00" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.196565 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="815acd85-7003-4c3d-b2c6-21bac99f762f" containerName="container-00" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.196761 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="815acd85-7003-4c3d-b2c6-21bac99f762f" containerName="container-00" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.197611 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.199571 4828 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.200006 4828 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.212703 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2"] Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.253096 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5beff367-77cf-40c7-bd86-210de00f2c6a-config-volume\") pod \"collect-profiles-29416095-qnhf2\" (UID: \"5beff367-77cf-40c7-bd86-210de00f2c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.253149 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5beff367-77cf-40c7-bd86-210de00f2c6a-secret-volume\") pod \"collect-profiles-29416095-qnhf2\" (UID: \"5beff367-77cf-40c7-bd86-210de00f2c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.253428 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg5vm\" (UniqueName: \"kubernetes.io/projected/5beff367-77cf-40c7-bd86-210de00f2c6a-kube-api-access-xg5vm\") pod \"collect-profiles-29416095-qnhf2\" (UID: \"5beff367-77cf-40c7-bd86-210de00f2c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.324142 4828 generic.go:334] "Generic (PLEG): container finished" podID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" exitCode=1 Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.324187 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerDied","Data":"669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"} Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.324270 4828 scope.go:117] "RemoveContainer" containerID="49b68706ff6cb133e98a8e73e383231ef31d28f2048427edba9ba321e824e434" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.324816 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:15:00 crc kubenswrapper[4828]: E1205 20:15:00.325076 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.355206 4828 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5beff367-77cf-40c7-bd86-210de00f2c6a-secret-volume\") pod \"collect-profiles-29416095-qnhf2\" (UID: \"5beff367-77cf-40c7-bd86-210de00f2c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.355318 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg5vm\" (UniqueName: \"kubernetes.io/projected/5beff367-77cf-40c7-bd86-210de00f2c6a-kube-api-access-xg5vm\") pod \"collect-profiles-29416095-qnhf2\" (UID: \"5beff367-77cf-40c7-bd86-210de00f2c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.355489 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5beff367-77cf-40c7-bd86-210de00f2c6a-config-volume\") pod \"collect-profiles-29416095-qnhf2\" (UID: \"5beff367-77cf-40c7-bd86-210de00f2c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.356319 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5beff367-77cf-40c7-bd86-210de00f2c6a-config-volume\") pod \"collect-profiles-29416095-qnhf2\" (UID: \"5beff367-77cf-40c7-bd86-210de00f2c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.363786 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5beff367-77cf-40c7-bd86-210de00f2c6a-secret-volume\") pod \"collect-profiles-29416095-qnhf2\" (UID: \"5beff367-77cf-40c7-bd86-210de00f2c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.383903 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg5vm\" (UniqueName: \"kubernetes.io/projected/5beff367-77cf-40c7-bd86-210de00f2c6a-kube-api-access-xg5vm\") pod \"collect-profiles-29416095-qnhf2\" (UID: \"5beff367-77cf-40c7-bd86-210de00f2c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.523229 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.554016 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8769b7dc8-87tcr_7b34386c-5f6a-420f-8889-5dd31e8560c0/barbican-api/0.log" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.665568 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8769b7dc8-87tcr_7b34386c-5f6a-420f-8889-5dd31e8560c0/barbican-api-log/0.log" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.749458 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c9967b7f4-tjx24_e1a17074-48bc-4f34-8a44-dd1321ff8fc1/barbican-keystone-listener/0.log" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.806744 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c9967b7f4-tjx24_e1a17074-48bc-4f34-8a44-dd1321ff8fc1/barbican-keystone-listener-log/0.log" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.946240 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d6b94f97f-2m6zj_ba560005-dff7-4d93-b2aa-58d922405ff3/barbican-worker/0.log" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.966158 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d6b94f97f-2m6zj_ba560005-dff7-4d93-b2aa-58d922405ff3/barbican-worker-log/0.log" Dec 05 20:15:00 crc kubenswrapper[4828]: I1205 20:15:00.991683 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2"] Dec 05 20:15:01 crc kubenswrapper[4828]: I1205 20:15:01.653185 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-m6sxr_f959e321-6568-4dd3-8c87-0ebb49d9c517/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:01 crc kubenswrapper[4828]: I1205 20:15:01.739143 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1984b123-aa0e-4af4-a396-76c783a22b45/ceilometer-notification-agent/0.log" Dec 05 20:15:01 crc kubenswrapper[4828]: I1205 20:15:01.743256 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1984b123-aa0e-4af4-a396-76c783a22b45/ceilometer-central-agent/0.log" Dec 05 20:15:01 crc kubenswrapper[4828]: I1205 20:15:01.916311 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1984b123-aa0e-4af4-a396-76c783a22b45/proxy-httpd/0.log" Dec 05 20:15:01 crc kubenswrapper[4828]: I1205 20:15:01.966904 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1984b123-aa0e-4af4-a396-76c783a22b45/sg-core/0.log" Dec 05 20:15:02 crc kubenswrapper[4828]: I1205 20:15:02.061203 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_55a11269-8096-4009-a3b0-44f7d554fe4f/cinder-api/0.log" Dec 05 20:15:02 crc kubenswrapper[4828]: I1205 20:15:02.152733 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_55a11269-8096-4009-a3b0-44f7d554fe4f/cinder-api-log/0.log" Dec 05 20:15:02 crc kubenswrapper[4828]: I1205 20:15:02.292615 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_714c55c4-ac9a-4e63-8159-04f311676ad5/probe/0.log" Dec 05 20:15:02 crc kubenswrapper[4828]: I1205 20:15:02.320065 4828 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_714c55c4-ac9a-4e63-8159-04f311676ad5/cinder-scheduler/0.log" Dec 05 20:15:02 crc kubenswrapper[4828]: I1205 20:15:02.346327 4828 generic.go:334] "Generic (PLEG): container finished" podID="5beff367-77cf-40c7-bd86-210de00f2c6a" containerID="de8aba035fcc47b42ce992838ec23a53f33f44c65200c591d5c3b397069f146c" exitCode=0 Dec 05 20:15:02 crc kubenswrapper[4828]: I1205 20:15:02.346423 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" event={"ID":"5beff367-77cf-40c7-bd86-210de00f2c6a","Type":"ContainerDied","Data":"de8aba035fcc47b42ce992838ec23a53f33f44c65200c591d5c3b397069f146c"} Dec 05 20:15:02 crc kubenswrapper[4828]: I1205 20:15:02.346742 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" event={"ID":"5beff367-77cf-40c7-bd86-210de00f2c6a","Type":"ContainerStarted","Data":"af513a580b814cbb4333ea9f3bae2eeab62a5c70c8839f1f5742982e57e82800"} Dec 05 20:15:02 crc kubenswrapper[4828]: I1205 20:15:02.440017 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qzpdz_814c8a59-108d-4ee6-943c-2f4e11294f14/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:02 crc kubenswrapper[4828]: I1205 20:15:02.590470 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mjr9f_3f1c3024-3679-435b-9252-3cd35ee43b4b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:02 crc kubenswrapper[4828]: I1205 20:15:02.718935 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-w4hmm_f77804ae-0e68-40a3-bbd8-5dac2e64eedf/init/0.log" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.252529 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-w4hmm_f77804ae-0e68-40a3-bbd8-5dac2e64eedf/init/0.log" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.365568 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-w4hmm_f77804ae-0e68-40a3-bbd8-5dac2e64eedf/dnsmasq-dns/0.log" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.435782 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-dcrds_04bf9e49-2000-4a46-81a8-3dc1ef7c352f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.608416 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8ee563d9-a334-428c-8d24-b0b1438e8ee8/glance-httpd/0.log" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.696874 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8ee563d9-a334-428c-8d24-b0b1438e8ee8/glance-log/0.log" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.818511 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.847495 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a7126f93-6b58-41ae-8f7a-b86281398e90/glance-log/0.log" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.851298 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a7126f93-6b58-41ae-8f7a-b86281398e90/glance-httpd/0.log" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.870365 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5beff367-77cf-40c7-bd86-210de00f2c6a-config-volume\") pod \"5beff367-77cf-40c7-bd86-210de00f2c6a\" (UID: \"5beff367-77cf-40c7-bd86-210de00f2c6a\") " Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.870510 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5beff367-77cf-40c7-bd86-210de00f2c6a-secret-volume\") pod \"5beff367-77cf-40c7-bd86-210de00f2c6a\" (UID: \"5beff367-77cf-40c7-bd86-210de00f2c6a\") " Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.870559 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg5vm\" (UniqueName: \"kubernetes.io/projected/5beff367-77cf-40c7-bd86-210de00f2c6a-kube-api-access-xg5vm\") pod \"5beff367-77cf-40c7-bd86-210de00f2c6a\" (UID: \"5beff367-77cf-40c7-bd86-210de00f2c6a\") " Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.872899 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5beff367-77cf-40c7-bd86-210de00f2c6a-config-volume" (OuterVolumeSpecName: "config-volume") pod "5beff367-77cf-40c7-bd86-210de00f2c6a" (UID: "5beff367-77cf-40c7-bd86-210de00f2c6a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.879998 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5beff367-77cf-40c7-bd86-210de00f2c6a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5beff367-77cf-40c7-bd86-210de00f2c6a" (UID: "5beff367-77cf-40c7-bd86-210de00f2c6a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.882159 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5beff367-77cf-40c7-bd86-210de00f2c6a-kube-api-access-xg5vm" (OuterVolumeSpecName: "kube-api-access-xg5vm") pod "5beff367-77cf-40c7-bd86-210de00f2c6a" (UID: "5beff367-77cf-40c7-bd86-210de00f2c6a"). InnerVolumeSpecName "kube-api-access-xg5vm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.973449 4828 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5beff367-77cf-40c7-bd86-210de00f2c6a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.973486 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg5vm\" (UniqueName: \"kubernetes.io/projected/5beff367-77cf-40c7-bd86-210de00f2c6a-kube-api-access-xg5vm\") on node \"crc\" DevicePath \"\"" Dec 05 20:15:03 crc kubenswrapper[4828]: I1205 20:15:03.973499 4828 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5beff367-77cf-40c7-bd86-210de00f2c6a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.071179 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-594b9fb44-r9zh6_99c01665-feb9-49f7-a97a-b6e6d87dc991/horizon/0.log" Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.237262 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-b4nx8_ab033631-5ea0-4fce-a4e3-3f0c390f07ac/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.434173 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" event={"ID":"5beff367-77cf-40c7-bd86-210de00f2c6a","Type":"ContainerDied","Data":"af513a580b814cbb4333ea9f3bae2eeab62a5c70c8839f1f5742982e57e82800"} Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.434220 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af513a580b814cbb4333ea9f3bae2eeab62a5c70c8839f1f5742982e57e82800" Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.434285 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416095-qnhf2" Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.489911 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-bctgl_96658594-f9dc-4bc6-8d77-3db81db8d2fd/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.502088 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-594b9fb44-r9zh6_99c01665-feb9-49f7-a97a-b6e6d87dc991/horizon-log/0.log" Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.720520 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29416081-mbbpn_d30b1521-4341-40f3-8952-8e0d03fc192b/keystone-cron/0.log" Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.727276 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-89dcf679f-97rfx_49b5bac7-ab0b-4ddc-a047-8ae18b51a9b4/keystone-api/0.log" Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.736456 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d2de7f1c-8c50-41e1-be30-ce169c261e65/kube-state-metrics/0.log" Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.911797 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz"] Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.921006 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416050-kp4hz"] Dec 05 20:15:04 crc kubenswrapper[4828]: I1205 20:15:04.960642 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ddgp5_b46bef7a-7a08-49f8-a4ff-d6fae6ac588e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:05 crc kubenswrapper[4828]: I1205 20:15:05.117682 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 20:15:05 crc kubenswrapper[4828]: I1205 20:15:05.118503 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:15:05 crc kubenswrapper[4828]: E1205 20:15:05.118777 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:15:05 crc kubenswrapper[4828]: I1205 20:15:05.119387 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 20:15:05 crc kubenswrapper[4828]: I1205 20:15:05.259294 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 20:15:05 crc kubenswrapper[4828]: I1205 20:15:05.259359 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" 
podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 20:15:05 crc kubenswrapper[4828]: I1205 20:15:05.430549 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dc6b5c7cf-8mlhh_f336b9b2-7051-43e5-8b10-ad9cab15c947/neutron-api/0.log" Dec 05 20:15:05 crc kubenswrapper[4828]: I1205 20:15:05.444331 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:15:05 crc kubenswrapper[4828]: E1205 20:15:05.444649 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:15:05 crc kubenswrapper[4828]: I1205 20:15:05.559459 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dc6b5c7cf-8mlhh_f336b9b2-7051-43e5-8b10-ad9cab15c947/neutron-httpd/0.log" Dec 05 20:15:05 crc kubenswrapper[4828]: I1205 20:15:05.563934 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5j2_02cb0b69-3011-491e-8081-0ee1a0053610/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:06 crc kubenswrapper[4828]: I1205 20:15:06.195198 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e8beb365-61f2-42bf-be67-af226900e81c/nova-api-log/0.log" Dec 05 20:15:06 crc kubenswrapper[4828]: I1205 20:15:06.210798 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_6f14d5c3-1d7a-4efc-90f3-e97d2cb4098d/nova-cell0-conductor-conductor/0.log" Dec 05 20:15:06 crc kubenswrapper[4828]: I1205 20:15:06.472780 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37f504a5-cc40-4bc6-9f02-4400004a3dce" path="/var/lib/kubelet/pods/37f504a5-cc40-4bc6-9f02-4400004a3dce/volumes" Dec 05 20:15:06 crc kubenswrapper[4828]: I1205 20:15:06.517590 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5f2a95e9-2c8f-4c04-b9f8-546e8a09aa7b/nova-cell1-conductor-conductor/0.log" Dec 05 20:15:06 crc kubenswrapper[4828]: I1205 20:15:06.612257 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e8beb365-61f2-42bf-be67-af226900e81c/nova-api-api/0.log" Dec 05 20:15:06 crc kubenswrapper[4828]: I1205 20:15:06.661336 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ddf78da4-c3d6-41b2-b8e1-803e3f075586/nova-cell1-novncproxy-novncproxy/0.log" Dec 05 20:15:06 crc kubenswrapper[4828]: I1205 20:15:06.739562 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-chk7b_b730436a-244c-4d2f-8e29-ca230cfe4921/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:07 crc kubenswrapper[4828]: I1205 20:15:07.016503 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d0bf5ea5-86ef-400d-a033-4bb5c31f61df/nova-metadata-log/0.log" Dec 05 20:15:07 crc kubenswrapper[4828]: I1205 20:15:07.241312 4828 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-scheduler-0_e6c44e1b-fe99-4645-894d-8f7c89ec0ed2/nova-scheduler-scheduler/0.log" Dec 05 20:15:07 crc kubenswrapper[4828]: I1205 20:15:07.274011 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7064b569-c206-4ed9-8f28-3e5a7e92bf79/mysql-bootstrap/0.log" Dec 05 20:15:07 crc kubenswrapper[4828]: I1205 20:15:07.440340 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7064b569-c206-4ed9-8f28-3e5a7e92bf79/mysql-bootstrap/0.log" Dec 05 20:15:07 crc kubenswrapper[4828]: I1205 20:15:07.484350 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7064b569-c206-4ed9-8f28-3e5a7e92bf79/galera/0.log" Dec 05 20:15:07 crc kubenswrapper[4828]: I1205 20:15:07.681689 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2debacb-a691-43ee-aa79-670bbec2a98a/mysql-bootstrap/0.log" Dec 05 20:15:07 crc kubenswrapper[4828]: I1205 20:15:07.855401 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2debacb-a691-43ee-aa79-670bbec2a98a/mysql-bootstrap/0.log" Dec 05 20:15:07 crc kubenswrapper[4828]: I1205 20:15:07.857808 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a2debacb-a691-43ee-aa79-670bbec2a98a/galera/0.log" Dec 05 20:15:08 crc kubenswrapper[4828]: I1205 20:15:08.064710 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_847a8779-d691-4659-9166-a8f39abb55f4/openstackclient/0.log" Dec 05 20:15:08 crc kubenswrapper[4828]: I1205 20:15:08.162037 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gqdhj_4ba9cffc-5e2b-44e9-966a-833ab0de45eb/openstack-network-exporter/0.log" Dec 05 20:15:08 crc kubenswrapper[4828]: I1205 20:15:08.266868 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d0bf5ea5-86ef-400d-a033-4bb5c31f61df/nova-metadata-metadata/0.log" Dec 05 20:15:08 crc kubenswrapper[4828]: I1205 20:15:08.339714 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l467t_3b912679-3c5e-4511-8769-8b8b4923d9fd/ovsdb-server-init/0.log" Dec 05 20:15:08 crc kubenswrapper[4828]: I1205 20:15:08.502411 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l467t_3b912679-3c5e-4511-8769-8b8b4923d9fd/ovsdb-server-init/0.log" Dec 05 20:15:08 crc kubenswrapper[4828]: I1205 20:15:08.538093 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l467t_3b912679-3c5e-4511-8769-8b8b4923d9fd/ovs-vswitchd/0.log" Dec 05 20:15:08 crc kubenswrapper[4828]: I1205 20:15:08.555089 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l467t_3b912679-3c5e-4511-8769-8b8b4923d9fd/ovsdb-server/0.log" Dec 05 20:15:08 crc kubenswrapper[4828]: I1205 20:15:08.684288 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-s6jdb_f88a4161-1271-4374-9740-eaea879d6561/ovn-controller/0.log" Dec 05 20:15:08 crc kubenswrapper[4828]: I1205 20:15:08.813626 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-57rf6_0ce437eb-13b3-49a9-adcf-874e3e672a8c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:08 crc kubenswrapper[4828]: I1205 20:15:08.904674 4828 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-northd-0_bc0e095c-680f-45ec-96b2-3713515bc9c3/openstack-network-exporter/0.log" Dec 05 20:15:08 crc kubenswrapper[4828]: I1205 20:15:08.990608 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bc0e095c-680f-45ec-96b2-3713515bc9c3/ovn-northd/0.log" Dec 05 20:15:09 crc kubenswrapper[4828]: I1205 20:15:09.099029 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7ac00d92-7825-4462-ab12-8d2059085d24/ovsdbserver-nb/0.log" Dec 05 20:15:09 crc kubenswrapper[4828]: I1205 20:15:09.130767 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7ac00d92-7825-4462-ab12-8d2059085d24/openstack-network-exporter/0.log" Dec 05 20:15:09 crc kubenswrapper[4828]: I1205 20:15:09.276045 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_31b675bd-ec74-4876-91a0-95e4180e8cab/ovsdbserver-sb/0.log" Dec 05 20:15:09 crc kubenswrapper[4828]: I1205 20:15:09.289914 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_31b675bd-ec74-4876-91a0-95e4180e8cab/openstack-network-exporter/0.log" Dec 05 20:15:09 crc kubenswrapper[4828]: I1205 20:15:09.521435 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6bfcc469f6-vtpj6_a9e67cf9-61e2-43a1-867a-a8f97ada16a4/placement-api/0.log" Dec 05 20:15:09 crc kubenswrapper[4828]: I1205 20:15:09.663161 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_97ef01a4-c35c-41a0-abf1-1fbb83ff67e6/setup-container/0.log" Dec 05 20:15:09 crc kubenswrapper[4828]: I1205 20:15:09.676013 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6bfcc469f6-vtpj6_a9e67cf9-61e2-43a1-867a-a8f97ada16a4/placement-log/0.log" Dec 05 20:15:09 crc kubenswrapper[4828]: I1205 20:15:09.985235 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_97ef01a4-c35c-41a0-abf1-1fbb83ff67e6/rabbitmq/0.log" Dec 05 20:15:10 crc kubenswrapper[4828]: I1205 20:15:10.030650 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63ac6b69-a1ea-4b8d-9532-679d79cd1a87/setup-container/0.log" Dec 05 20:15:10 crc kubenswrapper[4828]: I1205 20:15:10.063898 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_97ef01a4-c35c-41a0-abf1-1fbb83ff67e6/setup-container/0.log" Dec 05 20:15:10 crc kubenswrapper[4828]: I1205 20:15:10.265973 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63ac6b69-a1ea-4b8d-9532-679d79cd1a87/setup-container/0.log" Dec 05 20:15:10 crc kubenswrapper[4828]: I1205 20:15:10.284356 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63ac6b69-a1ea-4b8d-9532-679d79cd1a87/rabbitmq/0.log" Dec 05 20:15:10 crc kubenswrapper[4828]: I1205 20:15:10.334590 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-txr7g_f51d93aa-b89c-4da8-b091-8a9888820e61/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:10 crc kubenswrapper[4828]: I1205 20:15:10.914335 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-5fhnz_096e625a-8244-411f-aaad-9746cf1e1878/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:10 crc kubenswrapper[4828]: I1205 
20:15:10.961681 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mt7ss_a2df868b-dc23-4623-9203-42c91c9ff35b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:11 crc kubenswrapper[4828]: I1205 20:15:11.190467 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-n5lc2_9b0ec9c6-c67f-45f2-be21-251c97a44a7e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:11 crc kubenswrapper[4828]: I1205 20:15:11.192237 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-f242c_c7014005-08da-4204-96f5-163111e61315/ssh-known-hosts-edpm-deployment/0.log" Dec 05 20:15:11 crc kubenswrapper[4828]: I1205 20:15:11.465571 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7b86fcf7f7-wb4rw_6cac2917-5dee-4c64-a745-42e811cd735f/proxy-server/0.log" Dec 05 20:15:11 crc kubenswrapper[4828]: I1205 20:15:11.513445 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7b86fcf7f7-wb4rw_6cac2917-5dee-4c64-a745-42e811cd735f/proxy-httpd/0.log" Dec 05 20:15:11 crc kubenswrapper[4828]: I1205 20:15:11.659262 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vs6cm_784df3ad-b111-476d-ad5c-e10ee3e04b2f/swift-ring-rebalance/0.log" Dec 05 20:15:11 crc kubenswrapper[4828]: I1205 20:15:11.721098 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/account-auditor/0.log" Dec 05 20:15:11 crc kubenswrapper[4828]: I1205 20:15:11.747241 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/account-reaper/0.log" Dec 05 20:15:11 crc kubenswrapper[4828]: I1205 20:15:11.897954 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/account-replicator/0.log" Dec 05 20:15:11 crc kubenswrapper[4828]: I1205 20:15:11.907322 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/container-auditor/0.log" Dec 05 20:15:11 crc kubenswrapper[4828]: I1205 20:15:11.916303 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/account-server/0.log" Dec 05 20:15:12 crc kubenswrapper[4828]: I1205 20:15:12.013099 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/container-replicator/0.log" Dec 05 20:15:12 crc kubenswrapper[4828]: I1205 20:15:12.128335 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/container-server/0.log" Dec 05 20:15:12 crc kubenswrapper[4828]: I1205 20:15:12.138115 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/container-updater/0.log" Dec 05 20:15:12 crc kubenswrapper[4828]: I1205 20:15:12.163435 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/object-auditor/0.log" Dec 05 20:15:12 crc kubenswrapper[4828]: I1205 20:15:12.262759 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/object-expirer/0.log" Dec 05 20:15:12 crc kubenswrapper[4828]: I1205 20:15:12.756503 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/object-updater/0.log" Dec 05 20:15:12 crc kubenswrapper[4828]: I1205 20:15:12.760595 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/object-server/0.log" Dec 05 20:15:12 crc kubenswrapper[4828]: I1205 20:15:12.776876 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/object-replicator/0.log" Dec 05 20:15:12 crc kubenswrapper[4828]: I1205 20:15:12.797179 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/rsync/0.log" Dec 05 20:15:12 crc kubenswrapper[4828]: I1205 20:15:12.983845 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_821554f9-a51e-4a16-a053-b8bc18d93a9e/swift-recon-cron/0.log" Dec 05 20:15:13 crc kubenswrapper[4828]: I1205 20:15:13.022056 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-m7x8b_e0dde2a7-439b-4b5a-8e4b-363089a9879a/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:13 crc kubenswrapper[4828]: I1205 20:15:13.241158 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9d71b946-ed36-403c-9faf-feb03f741474/tempest-tests-tempest-tests-runner/0.log" Dec 05 20:15:13 crc kubenswrapper[4828]: I1205 20:15:13.284295 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9fe13abd-7133-4370-a848-17cea54271e1/test-operator-logs-container/0.log" Dec 05 20:15:13 crc kubenswrapper[4828]: I1205 20:15:13.447443 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vxgpz_a516aad0-97c7-46b3-b692-660dbd380bff/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.271908 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lv7rv"] Dec 05 20:15:16 crc kubenswrapper[4828]: E1205 20:15:16.272877 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5beff367-77cf-40c7-bd86-210de00f2c6a" containerName="collect-profiles" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.272894 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="5beff367-77cf-40c7-bd86-210de00f2c6a" containerName="collect-profiles" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.273105 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="5beff367-77cf-40c7-bd86-210de00f2c6a" containerName="collect-profiles" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.274461 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.290210 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lv7rv"] Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.466151 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81b5974-0c81-49f9-a27b-a7a725e526e5-utilities\") pod \"community-operators-lv7rv\" (UID: \"e81b5974-0c81-49f9-a27b-a7a725e526e5\") " pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.466196 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81b5974-0c81-49f9-a27b-a7a725e526e5-catalog-content\") pod \"community-operators-lv7rv\" (UID: \"e81b5974-0c81-49f9-a27b-a7a725e526e5\") " pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.466250 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjl9k\" (UniqueName: \"kubernetes.io/projected/e81b5974-0c81-49f9-a27b-a7a725e526e5-kube-api-access-vjl9k\") pod \"community-operators-lv7rv\" (UID: \"e81b5974-0c81-49f9-a27b-a7a725e526e5\") " pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.568051 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81b5974-0c81-49f9-a27b-a7a725e526e5-utilities\") pod \"community-operators-lv7rv\" (UID: \"e81b5974-0c81-49f9-a27b-a7a725e526e5\") " pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.568104 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81b5974-0c81-49f9-a27b-a7a725e526e5-catalog-content\") pod \"community-operators-lv7rv\" (UID: \"e81b5974-0c81-49f9-a27b-a7a725e526e5\") " pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.568163 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjl9k\" (UniqueName: \"kubernetes.io/projected/e81b5974-0c81-49f9-a27b-a7a725e526e5-kube-api-access-vjl9k\") pod \"community-operators-lv7rv\" (UID: \"e81b5974-0c81-49f9-a27b-a7a725e526e5\") " pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.568960 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81b5974-0c81-49f9-a27b-a7a725e526e5-utilities\") pod \"community-operators-lv7rv\" (UID: \"e81b5974-0c81-49f9-a27b-a7a725e526e5\") " pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.571294 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81b5974-0c81-49f9-a27b-a7a725e526e5-catalog-content\") pod \"community-operators-lv7rv\" (UID: \"e81b5974-0c81-49f9-a27b-a7a725e526e5\") " pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.590943 4828 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vjl9k\" (UniqueName: \"kubernetes.io/projected/e81b5974-0c81-49f9-a27b-a7a725e526e5-kube-api-access-vjl9k\") pod \"community-operators-lv7rv\" (UID: \"e81b5974-0c81-49f9-a27b-a7a725e526e5\") " pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:16 crc kubenswrapper[4828]: I1205 20:15:16.603761 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:17 crc kubenswrapper[4828]: I1205 20:15:17.170434 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lv7rv"] Dec 05 20:15:17 crc kubenswrapper[4828]: I1205 20:15:17.579930 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lv7rv" event={"ID":"e81b5974-0c81-49f9-a27b-a7a725e526e5","Type":"ContainerStarted","Data":"bd8b6f6b98f7d81a38e66b22e151bc19a02e3621efe5205f129e84f8b96fa5e5"} Dec 05 20:15:17 crc kubenswrapper[4828]: I1205 20:15:17.580247 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lv7rv" event={"ID":"e81b5974-0c81-49f9-a27b-a7a725e526e5","Type":"ContainerStarted","Data":"177492f1836150e28e71b8ce25241645a7ddea35604088d43ae7ae769a813b9e"} Dec 05 20:15:18 crc kubenswrapper[4828]: I1205 20:15:18.591806 4828 generic.go:334] "Generic (PLEG): container finished" podID="e81b5974-0c81-49f9-a27b-a7a725e526e5" containerID="bd8b6f6b98f7d81a38e66b22e151bc19a02e3621efe5205f129e84f8b96fa5e5" exitCode=0 Dec 05 20:15:18 crc kubenswrapper[4828]: I1205 20:15:18.592100 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lv7rv" event={"ID":"e81b5974-0c81-49f9-a27b-a7a725e526e5","Type":"ContainerDied","Data":"bd8b6f6b98f7d81a38e66b22e151bc19a02e3621efe5205f129e84f8b96fa5e5"} Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.066771 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fkvx6"] Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.077310 4828 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.097337 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkvx6"] Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.124627 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj4hf\" (UniqueName: \"kubernetes.io/projected/6bbe4d2c-0e99-4963-b66d-4e000155093c-kube-api-access-hj4hf\") pod \"redhat-marketplace-fkvx6\" (UID: \"6bbe4d2c-0e99-4963-b66d-4e000155093c\") " pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.124680 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbe4d2c-0e99-4963-b66d-4e000155093c-catalog-content\") pod \"redhat-marketplace-fkvx6\" (UID: \"6bbe4d2c-0e99-4963-b66d-4e000155093c\") " pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.124740 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbe4d2c-0e99-4963-b66d-4e000155093c-utilities\") pod \"redhat-marketplace-fkvx6\" (UID: \"6bbe4d2c-0e99-4963-b66d-4e000155093c\") " pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.226146 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj4hf\" (UniqueName: \"kubernetes.io/projected/6bbe4d2c-0e99-4963-b66d-4e000155093c-kube-api-access-hj4hf\") pod \"redhat-marketplace-fkvx6\" (UID: \"6bbe4d2c-0e99-4963-b66d-4e000155093c\") " pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.226208 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbe4d2c-0e99-4963-b66d-4e000155093c-catalog-content\") pod \"redhat-marketplace-fkvx6\" (UID: \"6bbe4d2c-0e99-4963-b66d-4e000155093c\") " pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.226274 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbe4d2c-0e99-4963-b66d-4e000155093c-utilities\") pod \"redhat-marketplace-fkvx6\" (UID: \"6bbe4d2c-0e99-4963-b66d-4e000155093c\") " pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.226862 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbe4d2c-0e99-4963-b66d-4e000155093c-utilities\") pod \"redhat-marketplace-fkvx6\" (UID: \"6bbe4d2c-0e99-4963-b66d-4e000155093c\") " pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.227010 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbe4d2c-0e99-4963-b66d-4e000155093c-catalog-content\") pod \"redhat-marketplace-fkvx6\" (UID: \"6bbe4d2c-0e99-4963-b66d-4e000155093c\") " pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.259647 4828 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hj4hf\" (UniqueName: \"kubernetes.io/projected/6bbe4d2c-0e99-4963-b66d-4e000155093c-kube-api-access-hj4hf\") pod \"redhat-marketplace-fkvx6\" (UID: \"6bbe4d2c-0e99-4963-b66d-4e000155093c\") " pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.419805 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.446749 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:15:19 crc kubenswrapper[4828]: E1205 20:15:19.447014 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:15:19 crc kubenswrapper[4828]: I1205 20:15:19.975537 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkvx6"] Dec 05 20:15:20 crc kubenswrapper[4828]: I1205 20:15:20.617667 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkvx6" event={"ID":"6bbe4d2c-0e99-4963-b66d-4e000155093c","Type":"ContainerStarted","Data":"ef9bb3dbe9840b1d074d0211584530ccf8cf8827ace548e80ff52966e3bfb1dd"} Dec 05 20:15:21 crc kubenswrapper[4828]: I1205 20:15:21.631624 4828 generic.go:334] "Generic (PLEG): container finished" podID="6bbe4d2c-0e99-4963-b66d-4e000155093c" containerID="d874424a52e3fb125305f9a0928fa0b98f6ea195697dde036f56bd7bb28d097e" exitCode=0 Dec 05 20:15:21 crc kubenswrapper[4828]: I1205 20:15:21.631979 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkvx6" event={"ID":"6bbe4d2c-0e99-4963-b66d-4e000155093c","Type":"ContainerDied","Data":"d874424a52e3fb125305f9a0928fa0b98f6ea195697dde036f56bd7bb28d097e"} Dec 05 20:15:21 crc kubenswrapper[4828]: I1205 20:15:21.639118 4828 generic.go:334] "Generic (PLEG): container finished" podID="e81b5974-0c81-49f9-a27b-a7a725e526e5" containerID="7d1380ce961970af57c78776da487f9d7db410a354f7adbae25cdf081e974d3b" exitCode=0 Dec 05 20:15:21 crc kubenswrapper[4828]: I1205 20:15:21.639148 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lv7rv" event={"ID":"e81b5974-0c81-49f9-a27b-a7a725e526e5","Type":"ContainerDied","Data":"7d1380ce961970af57c78776da487f9d7db410a354f7adbae25cdf081e974d3b"} Dec 05 20:15:22 crc kubenswrapper[4828]: I1205 20:15:22.718537 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_93282807-6c59-42db-9235-8b2097a8f7a9/memcached/0.log" Dec 05 20:15:23 crc kubenswrapper[4828]: I1205 20:15:23.656497 4828 generic.go:334] "Generic (PLEG): container finished" podID="6bbe4d2c-0e99-4963-b66d-4e000155093c" containerID="098f1b7c0dff73829f7b4dba62c3393fcfc157ce0eb0b68f127e8f65bc55aa3a" exitCode=0 Dec 05 20:15:23 crc kubenswrapper[4828]: I1205 20:15:23.656604 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkvx6" 
event={"ID":"6bbe4d2c-0e99-4963-b66d-4e000155093c","Type":"ContainerDied","Data":"098f1b7c0dff73829f7b4dba62c3393fcfc157ce0eb0b68f127e8f65bc55aa3a"} Dec 05 20:15:23 crc kubenswrapper[4828]: I1205 20:15:23.659373 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lv7rv" event={"ID":"e81b5974-0c81-49f9-a27b-a7a725e526e5","Type":"ContainerStarted","Data":"8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234"} Dec 05 20:15:23 crc kubenswrapper[4828]: I1205 20:15:23.700972 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lv7rv" podStartSLOduration=4.045746412 podStartE2EDuration="7.700953129s" podCreationTimestamp="2025-12-05 20:15:16 +0000 UTC" firstStartedPulling="2025-12-05 20:15:18.597513395 +0000 UTC m=+4296.492735701" lastFinishedPulling="2025-12-05 20:15:22.252720122 +0000 UTC m=+4300.147942418" observedRunningTime="2025-12-05 20:15:23.696107848 +0000 UTC m=+4301.591330154" watchObservedRunningTime="2025-12-05 20:15:23.700953129 +0000 UTC m=+4301.596175435" Dec 05 20:15:24 crc kubenswrapper[4828]: I1205 20:15:24.670315 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkvx6" event={"ID":"6bbe4d2c-0e99-4963-b66d-4e000155093c","Type":"ContainerStarted","Data":"33c27644db0734551a36abd0ae9d41934bca826dcea688345c67bc14c5242bdc"} Dec 05 20:15:24 crc kubenswrapper[4828]: I1205 20:15:24.691932 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fkvx6" podStartSLOduration=3.181537972 podStartE2EDuration="5.691913888s" podCreationTimestamp="2025-12-05 20:15:19 +0000 UTC" firstStartedPulling="2025-12-05 20:15:21.636985548 +0000 UTC m=+4299.532207854" lastFinishedPulling="2025-12-05 20:15:24.147361464 +0000 UTC m=+4302.042583770" observedRunningTime="2025-12-05 20:15:24.686698717 +0000 UTC m=+4302.581921023" watchObservedRunningTime="2025-12-05 20:15:24.691913888 +0000 UTC m=+4302.587136194" Dec 05 20:15:26 crc kubenswrapper[4828]: I1205 20:15:26.604139 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:26 crc kubenswrapper[4828]: I1205 20:15:26.604535 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:26 crc kubenswrapper[4828]: I1205 20:15:26.655919 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:29 crc kubenswrapper[4828]: I1205 20:15:29.419931 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:29 crc kubenswrapper[4828]: I1205 20:15:29.420293 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:29 crc kubenswrapper[4828]: I1205 20:15:29.469137 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:29 crc kubenswrapper[4828]: I1205 20:15:29.758219 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:29 crc kubenswrapper[4828]: I1205 20:15:29.810118 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-fkvx6"] Dec 05 20:15:31 crc kubenswrapper[4828]: I1205 20:15:31.730023 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fkvx6" podUID="6bbe4d2c-0e99-4963-b66d-4e000155093c" containerName="registry-server" containerID="cri-o://33c27644db0734551a36abd0ae9d41934bca826dcea688345c67bc14c5242bdc" gracePeriod=2 Dec 05 20:15:32 crc kubenswrapper[4828]: I1205 20:15:32.743017 4828 generic.go:334] "Generic (PLEG): container finished" podID="6bbe4d2c-0e99-4963-b66d-4e000155093c" containerID="33c27644db0734551a36abd0ae9d41934bca826dcea688345c67bc14c5242bdc" exitCode=0 Dec 05 20:15:32 crc kubenswrapper[4828]: I1205 20:15:32.743135 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkvx6" event={"ID":"6bbe4d2c-0e99-4963-b66d-4e000155093c","Type":"ContainerDied","Data":"33c27644db0734551a36abd0ae9d41934bca826dcea688345c67bc14c5242bdc"} Dec 05 20:15:32 crc kubenswrapper[4828]: I1205 20:15:32.743615 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkvx6" event={"ID":"6bbe4d2c-0e99-4963-b66d-4e000155093c","Type":"ContainerDied","Data":"ef9bb3dbe9840b1d074d0211584530ccf8cf8827ace548e80ff52966e3bfb1dd"} Dec 05 20:15:32 crc kubenswrapper[4828]: I1205 20:15:32.743685 4828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef9bb3dbe9840b1d074d0211584530ccf8cf8827ace548e80ff52966e3bfb1dd" Dec 05 20:15:32 crc kubenswrapper[4828]: I1205 20:15:32.816689 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkvx6" Dec 05 20:15:32 crc kubenswrapper[4828]: I1205 20:15:32.898525 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj4hf\" (UniqueName: \"kubernetes.io/projected/6bbe4d2c-0e99-4963-b66d-4e000155093c-kube-api-access-hj4hf\") pod \"6bbe4d2c-0e99-4963-b66d-4e000155093c\" (UID: \"6bbe4d2c-0e99-4963-b66d-4e000155093c\") " Dec 05 20:15:32 crc kubenswrapper[4828]: I1205 20:15:32.901258 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbe4d2c-0e99-4963-b66d-4e000155093c-utilities\") pod \"6bbe4d2c-0e99-4963-b66d-4e000155093c\" (UID: \"6bbe4d2c-0e99-4963-b66d-4e000155093c\") " Dec 05 20:15:32 crc kubenswrapper[4828]: I1205 20:15:32.901755 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbe4d2c-0e99-4963-b66d-4e000155093c-catalog-content\") pod \"6bbe4d2c-0e99-4963-b66d-4e000155093c\" (UID: \"6bbe4d2c-0e99-4963-b66d-4e000155093c\") " Dec 05 20:15:32 crc kubenswrapper[4828]: I1205 20:15:32.902157 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bbe4d2c-0e99-4963-b66d-4e000155093c-utilities" (OuterVolumeSpecName: "utilities") pod "6bbe4d2c-0e99-4963-b66d-4e000155093c" (UID: "6bbe4d2c-0e99-4963-b66d-4e000155093c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:15:32 crc kubenswrapper[4828]: I1205 20:15:32.904286 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbe4d2c-0e99-4963-b66d-4e000155093c-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 20:15:32 crc kubenswrapper[4828]: I1205 20:15:32.910087 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bbe4d2c-0e99-4963-b66d-4e000155093c-kube-api-access-hj4hf" (OuterVolumeSpecName: "kube-api-access-hj4hf") pod "6bbe4d2c-0e99-4963-b66d-4e000155093c" (UID: "6bbe4d2c-0e99-4963-b66d-4e000155093c"). InnerVolumeSpecName "kube-api-access-hj4hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:15:32 crc kubenswrapper[4828]: I1205 20:15:32.930100 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bbe4d2c-0e99-4963-b66d-4e000155093c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bbe4d2c-0e99-4963-b66d-4e000155093c" (UID: "6bbe4d2c-0e99-4963-b66d-4e000155093c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:15:33 crc kubenswrapper[4828]: I1205 20:15:33.005847 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbe4d2c-0e99-4963-b66d-4e000155093c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 20:15:33 crc kubenswrapper[4828]: I1205 20:15:33.005885 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj4hf\" (UniqueName: \"kubernetes.io/projected/6bbe4d2c-0e99-4963-b66d-4e000155093c-kube-api-access-hj4hf\") on node \"crc\" DevicePath \"\"" Dec 05 20:15:33 crc kubenswrapper[4828]: I1205 20:15:33.446583 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:15:33 crc kubenswrapper[4828]: E1205 20:15:33.446976 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:15:33 crc kubenswrapper[4828]: I1205 20:15:33.753628 4828 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 20:15:33 crc kubenswrapper[4828]: I1205 20:15:33.787675 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkvx6"]
Dec 05 20:15:33 crc kubenswrapper[4828]: I1205 20:15:33.796149 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkvx6"]
Dec 05 20:15:34 crc kubenswrapper[4828]: I1205 20:15:34.458521 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bbe4d2c-0e99-4963-b66d-4e000155093c" path="/var/lib/kubelet/pods/6bbe4d2c-0e99-4963-b66d-4e000155093c/volumes"
Dec 05 20:15:35 crc kubenswrapper[4828]: I1205 20:15:35.259784 4828 patch_prober.go:28] interesting pod/machine-config-daemon-nlqsv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 20:15:35 crc kubenswrapper[4828]: I1205 20:15:35.259899 4828 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 20:15:35 crc kubenswrapper[4828]: I1205 20:15:35.259947 4828 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv"
Dec 05 20:15:35 crc kubenswrapper[4828]: I1205 20:15:35.260648 4828 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"} pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 20:15:35 crc kubenswrapper[4828]: I1205 20:15:35.260704 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerName="machine-config-daemon" containerID="cri-o://09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" gracePeriod=600
Dec 05 20:15:35 crc kubenswrapper[4828]: E1205 20:15:35.402327 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:15:35 crc kubenswrapper[4828]: I1205 20:15:35.498353 4828 scope.go:117] "RemoveContainer" containerID="7ef5599f084d42eb8f185c7f070e44c8b8cec70e0fba9aaff5d1ed28c881ce7d"
Dec 05 20:15:35 crc kubenswrapper[4828]: I1205 20:15:35.771469 4828 generic.go:334] "Generic (PLEG): container finished" podID="a74199c1-79be-49b4-9c04-fdb48847c85e" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" exitCode=0
Dec 05 20:15:35 crc kubenswrapper[4828]: I1205 20:15:35.771507 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerDied","Data":"09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"}
Dec 05 20:15:35 crc kubenswrapper[4828]: I1205 20:15:35.771536 4828 scope.go:117] "RemoveContainer" containerID="d68143ac91ffa442f6544d88d704ba189350bba12267b53e7e8314035ce28693"
Dec 05 20:15:35 crc kubenswrapper[4828]: I1205 20:15:35.772078 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"
Dec 05 20:15:35 crc kubenswrapper[4828]: E1205 20:15:35.772384 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:15:36 crc kubenswrapper[4828]: I1205 20:15:36.669085 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lv7rv"
Dec 05 20:15:36 crc kubenswrapper[4828]: I1205 20:15:36.724669 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lv7rv"]
Dec 05 20:15:36 crc kubenswrapper[4828]: I1205 20:15:36.783236 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lv7rv" podUID="e81b5974-0c81-49f9-a27b-a7a725e526e5" containerName="registry-server" containerID="cri-o://8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234" gracePeriod=2
Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.264477 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lv7rv"
Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.387146 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81b5974-0c81-49f9-a27b-a7a725e526e5-utilities\") pod \"e81b5974-0c81-49f9-a27b-a7a725e526e5\" (UID: \"e81b5974-0c81-49f9-a27b-a7a725e526e5\") "
Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.387365 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81b5974-0c81-49f9-a27b-a7a725e526e5-catalog-content\") pod \"e81b5974-0c81-49f9-a27b-a7a725e526e5\" (UID: \"e81b5974-0c81-49f9-a27b-a7a725e526e5\") "
Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.387492 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjl9k\" (UniqueName: \"kubernetes.io/projected/e81b5974-0c81-49f9-a27b-a7a725e526e5-kube-api-access-vjl9k\") pod \"e81b5974-0c81-49f9-a27b-a7a725e526e5\" (UID: \"e81b5974-0c81-49f9-a27b-a7a725e526e5\") "
Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.388114 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e81b5974-0c81-49f9-a27b-a7a725e526e5-utilities" (OuterVolumeSpecName: "utilities") pod "e81b5974-0c81-49f9-a27b-a7a725e526e5" (UID: "e81b5974-0c81-49f9-a27b-a7a725e526e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.396978 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e81b5974-0c81-49f9-a27b-a7a725e526e5-kube-api-access-vjl9k" (OuterVolumeSpecName: "kube-api-access-vjl9k") pod "e81b5974-0c81-49f9-a27b-a7a725e526e5" (UID: "e81b5974-0c81-49f9-a27b-a7a725e526e5"). InnerVolumeSpecName "kube-api-access-vjl9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.443906 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e81b5974-0c81-49f9-a27b-a7a725e526e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e81b5974-0c81-49f9-a27b-a7a725e526e5" (UID: "e81b5974-0c81-49f9-a27b-a7a725e526e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.489419 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81b5974-0c81-49f9-a27b-a7a725e526e5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.489751 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjl9k\" (UniqueName: \"kubernetes.io/projected/e81b5974-0c81-49f9-a27b-a7a725e526e5-kube-api-access-vjl9k\") on node \"crc\" DevicePath \"\"" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.489855 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81b5974-0c81-49f9-a27b-a7a725e526e5-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.794381 4828 generic.go:334] "Generic (PLEG): container finished" podID="e81b5974-0c81-49f9-a27b-a7a725e526e5" containerID="8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234" exitCode=0 Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.794469 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lv7rv" event={"ID":"e81b5974-0c81-49f9-a27b-a7a725e526e5","Type":"ContainerDied","Data":"8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234"} Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.794532 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lv7rv" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.796031 4828 scope.go:117] "RemoveContainer" containerID="8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.795938 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lv7rv" event={"ID":"e81b5974-0c81-49f9-a27b-a7a725e526e5","Type":"ContainerDied","Data":"177492f1836150e28e71b8ce25241645a7ddea35604088d43ae7ae769a813b9e"} Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.817282 4828 scope.go:117] "RemoveContainer" containerID="7d1380ce961970af57c78776da487f9d7db410a354f7adbae25cdf081e974d3b" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.835844 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lv7rv"] Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.847250 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lv7rv"] Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.853672 4828 scope.go:117] "RemoveContainer" containerID="bd8b6f6b98f7d81a38e66b22e151bc19a02e3621efe5205f129e84f8b96fa5e5" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.891333 4828 scope.go:117] "RemoveContainer" containerID="8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234" Dec 05 20:15:37 crc kubenswrapper[4828]: E1205 20:15:37.892071 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234\": container with ID starting with 8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234 not found: ID does not exist" containerID="8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.892189 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234"} err="failed to get container status \"8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234\": rpc error: code = NotFound desc = could not find container \"8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234\": container with ID starting with 8a7960905e5a15ce85b3f561806158e611bc7c92eaf97d10f7da4e7a59c96234 not found: ID does not exist" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.892303 4828 scope.go:117] "RemoveContainer" containerID="7d1380ce961970af57c78776da487f9d7db410a354f7adbae25cdf081e974d3b" Dec 05 20:15:37 crc kubenswrapper[4828]: E1205 20:15:37.892673 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d1380ce961970af57c78776da487f9d7db410a354f7adbae25cdf081e974d3b\": container with ID starting with 7d1380ce961970af57c78776da487f9d7db410a354f7adbae25cdf081e974d3b not found: ID does not exist" containerID="7d1380ce961970af57c78776da487f9d7db410a354f7adbae25cdf081e974d3b" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.892700 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1380ce961970af57c78776da487f9d7db410a354f7adbae25cdf081e974d3b"} err="failed to get container status \"7d1380ce961970af57c78776da487f9d7db410a354f7adbae25cdf081e974d3b\": rpc error: code = NotFound desc = could not find 
container \"7d1380ce961970af57c78776da487f9d7db410a354f7adbae25cdf081e974d3b\": container with ID starting with 7d1380ce961970af57c78776da487f9d7db410a354f7adbae25cdf081e974d3b not found: ID does not exist" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.892717 4828 scope.go:117] "RemoveContainer" containerID="bd8b6f6b98f7d81a38e66b22e151bc19a02e3621efe5205f129e84f8b96fa5e5" Dec 05 20:15:37 crc kubenswrapper[4828]: E1205 20:15:37.893038 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd8b6f6b98f7d81a38e66b22e151bc19a02e3621efe5205f129e84f8b96fa5e5\": container with ID starting with bd8b6f6b98f7d81a38e66b22e151bc19a02e3621efe5205f129e84f8b96fa5e5 not found: ID does not exist" containerID="bd8b6f6b98f7d81a38e66b22e151bc19a02e3621efe5205f129e84f8b96fa5e5" Dec 05 20:15:37 crc kubenswrapper[4828]: I1205 20:15:37.893072 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd8b6f6b98f7d81a38e66b22e151bc19a02e3621efe5205f129e84f8b96fa5e5"} err="failed to get container status \"bd8b6f6b98f7d81a38e66b22e151bc19a02e3621efe5205f129e84f8b96fa5e5\": rpc error: code = NotFound desc = could not find container \"bd8b6f6b98f7d81a38e66b22e151bc19a02e3621efe5205f129e84f8b96fa5e5\": container with ID starting with bd8b6f6b98f7d81a38e66b22e151bc19a02e3621efe5205f129e84f8b96fa5e5 not found: ID does not exist" Dec 05 20:15:38 crc kubenswrapper[4828]: I1205 20:15:38.458270 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e81b5974-0c81-49f9-a27b-a7a725e526e5" path="/var/lib/kubelet/pods/e81b5974-0c81-49f9-a27b-a7a725e526e5/volumes" Dec 05 20:15:41 crc kubenswrapper[4828]: I1205 20:15:41.952947 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/util/0.log" Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.121697 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/util/0.log" Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.140240 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/pull/0.log" Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.214193 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/pull/0.log" Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.272232 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/util/0.log" Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.329275 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/pull/0.log" Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.330278 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b48a4eee867a5e9e8f227db42b03d11ad2b91551ccf51bfe0dde73285jdm97_1363220d-423b-45a5-a067-559b8a36f610/extract/0.log" Dec 05 20:15:42 crc 
Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.511266 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-jbg6n_1f1ef15a-9832-4ee5-8077-066329f6180a/kube-rbac-proxy/0.log"
Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.555086 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-jbg6n_1f1ef15a-9832-4ee5-8077-066329f6180a/manager/0.log"
Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.651609 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-qftqg_4276bd34-acab-4936-a044-7d00e33e806f/kube-rbac-proxy/0.log"
Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.710239 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-qftqg_4276bd34-acab-4936-a044-7d00e33e806f/manager/0.log"
Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.768284 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-cr94b_16bfe264-a5d1-433e-93ee-c6821e882c4c/kube-rbac-proxy/0.log"
Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.936200 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-cr94b_16bfe264-a5d1-433e-93ee-c6821e882c4c/manager/0.log"
Dec 05 20:15:42 crc kubenswrapper[4828]: I1205 20:15:42.941146 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-v92pz_a27719f3-1ce1-4a2b-876f-f280966f8e8c/kube-rbac-proxy/0.log"
Dec 05 20:15:43 crc kubenswrapper[4828]: I1205 20:15:43.030762 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-v92pz_a27719f3-1ce1-4a2b-876f-f280966f8e8c/manager/0.log"
Dec 05 20:15:43 crc kubenswrapper[4828]: I1205 20:15:43.125351 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-k7qf5_f5bca056-89ff-4e36-82b7-ad44d9dc00d6/kube-rbac-proxy/0.log"
Dec 05 20:15:43 crc kubenswrapper[4828]: I1205 20:15:43.172249 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-k7qf5_f5bca056-89ff-4e36-82b7-ad44d9dc00d6/manager/0.log"
Dec 05 20:15:43 crc kubenswrapper[4828]: I1205 20:15:43.293112 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-g2wd4_7dbe4cda-8493-4e63-9544-7dfff2495c65/kube-rbac-proxy/0.log"
Dec 05 20:15:43 crc kubenswrapper[4828]: I1205 20:15:43.323344 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-g2wd4_7dbe4cda-8493-4e63-9544-7dfff2495c65/manager/0.log"
Dec 05 20:15:43 crc kubenswrapper[4828]: I1205 20:15:43.497606 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-575477cdfc-lrhm5_03c4fc5d-6be1-47b4-9c39-7bb86046dafd/kube-rbac-proxy/0.log"
Dec 05 20:15:43 crc kubenswrapper[4828]: I1205 20:15:43.524231 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-575477cdfc-lrhm5_03c4fc5d-6be1-47b4-9c39-7bb86046dafd/manager/10.log"
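(manager/10.log above means CRI-O is keeping the log of restart number 10 of the infra-operator manager, which is why every retry now reports "back-off 5m0s": the kubelet's CrashLoopBackOff delay starts at 10s, doubles per failed restart, and is capped at five minutes. A toy reproduction of that schedule, using the constants as documented for the kubelet rather than read from its source.)

package main

import (
	"fmt"
	"time"
)

func main() {
	// Documented kubelet behavior: 10s initial delay, doubled per failed
	// restart, capped at 5m (the "back-off 5m0s" seen above).
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 10; restart++ {
		fmt.Printf("restart %2d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}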
Dec 05 20:15:43 crc kubenswrapper[4828]: I1205 20:15:43.587122 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-575477cdfc-lrhm5_03c4fc5d-6be1-47b4-9c39-7bb86046dafd/manager/10.log"
Dec 05 20:15:43 crc kubenswrapper[4828]: I1205 20:15:43.706228 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-9jbwm_04671bff-8616-471f-bd46-21e6b17227eb/kube-rbac-proxy/0.log"
Dec 05 20:15:43 crc kubenswrapper[4828]: I1205 20:15:43.740006 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-9jbwm_04671bff-8616-471f-bd46-21e6b17227eb/manager/0.log"
Dec 05 20:15:43 crc kubenswrapper[4828]: I1205 20:15:43.931049 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-pbmt2_1e335b54-f84b-4d91-a58e-0348728d171e/kube-rbac-proxy/0.log"
Dec 05 20:15:43 crc kubenswrapper[4828]: I1205 20:15:43.991901 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-pbmt2_1e335b54-f84b-4d91-a58e-0348728d171e/manager/0.log"
Dec 05 20:15:44 crc kubenswrapper[4828]: I1205 20:15:44.014967 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-cgkv6_3b18c18d-624d-4d50-95ba-a4f755f74936/kube-rbac-proxy/0.log"
Dec 05 20:15:44 crc kubenswrapper[4828]: I1205 20:15:44.107356 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-cgkv6_3b18c18d-624d-4d50-95ba-a4f755f74936/manager/0.log"
Dec 05 20:15:44 crc kubenswrapper[4828]: I1205 20:15:44.214260 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-77twz_c6d11d68-9609-432a-a855-4789df83739d/manager/0.log"
Dec 05 20:15:44 crc kubenswrapper[4828]: I1205 20:15:44.259492 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-77twz_c6d11d68-9609-432a-a855-4789df83739d/kube-rbac-proxy/0.log"
Dec 05 20:15:44 crc kubenswrapper[4828]: I1205 20:15:44.392770 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-gsczh_a03904e7-57be-4491-b11d-c8e698b718e6/kube-rbac-proxy/0.log"
Dec 05 20:15:44 crc kubenswrapper[4828]: I1205 20:15:44.434517 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-gsczh_a03904e7-57be-4491-b11d-c8e698b718e6/manager/0.log"
Dec 05 20:15:44 crc kubenswrapper[4828]: I1205 20:15:44.523624 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-l6gtp_757d5884-94d5-45f1-ae2c-49fd93ce512c/kube-rbac-proxy/0.log"
Dec 05 20:15:44 crc kubenswrapper[4828]: I1205 20:15:44.625730 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-l6gtp_757d5884-94d5-45f1-ae2c-49fd93ce512c/manager/0.log"
Dec 05 20:15:44 crc kubenswrapper[4828]: I1205 20:15:44.629869 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-cfnbh_a5d6b211-6f88-45fe-8e38-608271465dfe/kube-rbac-proxy/0.log"
Dec 05 20:15:44 crc kubenswrapper[4828]: I1205 20:15:44.706290 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-cfnbh_a5d6b211-6f88-45fe-8e38-608271465dfe/manager/0.log"
Dec 05 20:15:44 crc kubenswrapper[4828]: I1205 20:15:44.876359 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j_1ce74c6c-ee96-4712-983f-4090e176f31e/manager/0.log"
Dec 05 20:15:45 crc kubenswrapper[4828]: I1205 20:15:45.013522 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4c7l4j_1ce74c6c-ee96-4712-983f-4090e176f31e/kube-rbac-proxy/0.log"
Dec 05 20:15:45 crc kubenswrapper[4828]: I1205 20:15:45.225666 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-mn9b4_850c5dc4-1658-4c59-96eb-999fb7392164/registry-server/0.log"
Dec 05 20:15:45 crc kubenswrapper[4828]: I1205 20:15:45.472402 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-4gr5g_ba58375c-b3fa-4eb8-8813-c55f003674ca/kube-rbac-proxy/0.log"
Dec 05 20:15:45 crc kubenswrapper[4828]: I1205 20:15:45.615985 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-4gr5g_ba58375c-b3fa-4eb8-8813-c55f003674ca/manager/0.log"
Dec 05 20:15:45 crc kubenswrapper[4828]: I1205 20:15:45.672142 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-cf5gg_cd2986fb-f299-446c-85b7-28427df0ca51/kube-rbac-proxy/0.log"
Dec 05 20:15:45 crc kubenswrapper[4828]: I1205 20:15:45.684356 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-56d574f77c-99sf5_4ceee1c7-178c-4496-9cdd-c302d5180aca/operator/0.log"
Dec 05 20:15:45 crc kubenswrapper[4828]: I1205 20:15:45.857744 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-cf5gg_cd2986fb-f299-446c-85b7-28427df0ca51/manager/0.log"
Dec 05 20:15:45 crc kubenswrapper[4828]: I1205 20:15:45.947631 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-w4rgq_dabc71c3-947a-4d4c-90bd-b5bb473ce013/operator/0.log"
Dec 05 20:15:46 crc kubenswrapper[4828]: I1205 20:15:46.066763 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-6xg2c_48135908-b8f6-47ab-aeb7-3f74bb3e2cde/kube-rbac-proxy/0.log"
Dec 05 20:15:46 crc kubenswrapper[4828]: I1205 20:15:46.164769 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-6xg2c_48135908-b8f6-47ab-aeb7-3f74bb3e2cde/manager/0.log"
Dec 05 20:15:46 crc kubenswrapper[4828]: I1205 20:15:46.267266 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-hdxm9_8c1110f4-40af-416e-9624-22a901897000/kube-rbac-proxy/0.log"
Dec 05 20:15:46 crc kubenswrapper[4828]: I1205 20:15:46.290607 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-d5958f94b-76zjx_408ecf49-524f-4743-9cef-5c65877dd176/manager/0.log"
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-d5958f94b-76zjx_408ecf49-524f-4743-9cef-5c65877dd176/manager/0.log" Dec 05 20:15:46 crc kubenswrapper[4828]: I1205 20:15:46.409218 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-hdxm9_8c1110f4-40af-416e-9624-22a901897000/manager/0.log" Dec 05 20:15:46 crc kubenswrapper[4828]: I1205 20:15:46.434083 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h2d97_13474ecf-c76e-400f-bc72-70c11ab8356b/kube-rbac-proxy/0.log" Dec 05 20:15:46 crc kubenswrapper[4828]: I1205 20:15:46.463335 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h2d97_13474ecf-c76e-400f-bc72-70c11ab8356b/manager/0.log" Dec 05 20:15:46 crc kubenswrapper[4828]: I1205 20:15:46.616033 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-qdslg_bf305ed3-e27f-42bc-9fb7-bec903ca820f/kube-rbac-proxy/0.log" Dec 05 20:15:46 crc kubenswrapper[4828]: I1205 20:15:46.629771 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-qdslg_bf305ed3-e27f-42bc-9fb7-bec903ca820f/manager/0.log" Dec 05 20:15:48 crc kubenswrapper[4828]: I1205 20:15:48.447289 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:15:48 crc kubenswrapper[4828]: E1205 20:15:48.447978 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:15:50 crc kubenswrapper[4828]: I1205 20:15:50.446680 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" Dec 05 20:15:50 crc kubenswrapper[4828]: E1205 20:15:50.447161 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:16:00 crc kubenswrapper[4828]: I1205 20:16:00.447206 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:16:00 crc kubenswrapper[4828]: E1205 20:16:00.448169 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:16:01 crc kubenswrapper[4828]: I1205 20:16:01.446403 4828 scope.go:117] "RemoveContainer" 
containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" Dec 05 20:16:01 crc kubenswrapper[4828]: E1205 20:16:01.446873 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:16:09 crc kubenswrapper[4828]: I1205 20:16:09.012261 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vbgcx_e5365032-f31f-4e90-bb94-193e5d6dcc9f/kube-rbac-proxy/0.log" Dec 05 20:16:09 crc kubenswrapper[4828]: I1205 20:16:09.035175 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kn6kp_6ad1915a-9298-4aba-928b-5d3c7d57a7bb/control-plane-machine-set-operator/0.log" Dec 05 20:16:09 crc kubenswrapper[4828]: I1205 20:16:09.134676 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vbgcx_e5365032-f31f-4e90-bb94-193e5d6dcc9f/machine-api-operator/0.log" Dec 05 20:16:11 crc kubenswrapper[4828]: I1205 20:16:11.446660 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:16:11 crc kubenswrapper[4828]: E1205 20:16:11.447392 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:16:15 crc kubenswrapper[4828]: I1205 20:16:15.448520 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" Dec 05 20:16:15 crc kubenswrapper[4828]: E1205 20:16:15.449162 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:16:23 crc kubenswrapper[4828]: I1205 20:16:23.509515 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-49xx5_c8cac37c-e093-48f2-b1da-eaf62bf95bfd/cert-manager-cainjector/0.log" Dec 05 20:16:23 crc kubenswrapper[4828]: I1205 20:16:23.533532 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-7srhj_3d6b347b-b532-43b5-b0d4-8c40b7962156/cert-manager-controller/0.log" Dec 05 20:16:23 crc kubenswrapper[4828]: I1205 20:16:23.680480 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-n596v_2fcb84c0-5e6a-45ee-9c06-f0a12a1ef15b/cert-manager-webhook/0.log" Dec 05 20:16:26 crc kubenswrapper[4828]: I1205 20:16:26.446399 4828 scope.go:117] "RemoveContainer" 
containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:16:26 crc kubenswrapper[4828]: E1205 20:16:26.448093 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:16:28 crc kubenswrapper[4828]: I1205 20:16:28.447256 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" Dec 05 20:16:28 crc kubenswrapper[4828]: E1205 20:16:28.448022 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:16:36 crc kubenswrapper[4828]: I1205 20:16:36.381713 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-hbrt4_96df2436-0a55-4b21-900b-dfedbafa290d/nmstate-console-plugin/0.log" Dec 05 20:16:36 crc kubenswrapper[4828]: I1205 20:16:36.577443 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lmln5_7bd0a6fb-88d4-4e0a-9ac1-d9334b2f91b8/nmstate-handler/0.log" Dec 05 20:16:36 crc kubenswrapper[4828]: I1205 20:16:36.578604 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-jzb4h_5e07f179-6cb8-4771-894c-7ad6c2ee6b10/kube-rbac-proxy/0.log" Dec 05 20:16:36 crc kubenswrapper[4828]: I1205 20:16:36.638728 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-jzb4h_5e07f179-6cb8-4771-894c-7ad6c2ee6b10/nmstate-metrics/0.log" Dec 05 20:16:36 crc kubenswrapper[4828]: I1205 20:16:36.793767 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-2zdc5_1691d52d-868b-4121-8863-2a59db739b1b/nmstate-operator/0.log" Dec 05 20:16:36 crc kubenswrapper[4828]: I1205 20:16:36.849367 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-j5ps8_c535354b-ac85-4a30-9f7d-1547f2db8fbc/nmstate-webhook/0.log" Dec 05 20:16:40 crc kubenswrapper[4828]: I1205 20:16:40.446424 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" Dec 05 20:16:40 crc kubenswrapper[4828]: E1205 20:16:40.447143 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:16:41 crc kubenswrapper[4828]: I1205 20:16:41.446767 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:16:41 crc 
kubenswrapper[4828]: E1205 20:16:41.447154 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:16:51 crc kubenswrapper[4828]: I1205 20:16:51.446987 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" Dec 05 20:16:51 crc kubenswrapper[4828]: E1205 20:16:51.450867 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:16:52 crc kubenswrapper[4828]: I1205 20:16:52.984143 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-t5wnc_35cb1f63-dbf8-4451-adff-4b35840e5498/kube-rbac-proxy/0.log" Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.050563 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-t5wnc_35cb1f63-dbf8-4451-adff-4b35840e5498/controller/0.log" Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.139794 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-frr-files/0.log" Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.330509 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-reloader/0.log" Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.337862 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-frr-files/0.log" Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.356078 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-reloader/0.log" Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.370285 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-metrics/0.log" Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.446746 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:16:53 crc kubenswrapper[4828]: E1205 20:16:53.447068 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.506127 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-frr-files/0.log" Dec 05 20:16:53 crc 
Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.561637 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-metrics/0.log"
Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.563081 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-reloader/0.log"
Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.592520 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-metrics/0.log"
Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.706645 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-frr-files/0.log"
Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.728588 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-reloader/0.log"
Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.730374 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/cp-metrics/0.log"
Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.753604 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/controller/0.log"
Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.949175 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/frr-metrics/0.log"
Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.955182 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/kube-rbac-proxy-frr/0.log"
Dec 05 20:16:53 crc kubenswrapper[4828]: I1205 20:16:53.968087 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/kube-rbac-proxy/0.log"
Dec 05 20:16:54 crc kubenswrapper[4828]: I1205 20:16:54.536086 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-w8vp2_67e4c769-0905-4c8f-8fc0-2488346fe188/frr-k8s-webhook-server/0.log"
Dec 05 20:16:54 crc kubenswrapper[4828]: I1205 20:16:54.579461 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/reloader/0.log"
Dec 05 20:16:54 crc kubenswrapper[4828]: I1205 20:16:54.840255 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-cccfd6bcb-v7d55_b8cd7c76-c03d-4f9a-9e3e-d982c39d92c2/manager/0.log"
Dec 05 20:16:54 crc kubenswrapper[4828]: I1205 20:16:54.898136 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8f476869-jcl7n_88807e96-2cf3-4ab4-863d-48538fac8bc8/webhook-server/0.log"
Dec 05 20:16:55 crc kubenswrapper[4828]: I1205 20:16:55.080322 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gqq4l_8a121072-6f44-4a42-b9b1-a54d8d04fea4/kube-rbac-proxy/0.log"
Dec 05 20:16:55 crc kubenswrapper[4828]: I1205 20:16:55.343759 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-p2wtq_c24da36a-69fc-4337-87e0-4a1cc34090ff/frr/0.log"
Dec 05 20:16:55 crc kubenswrapper[4828]: I1205 20:16:55.510842 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gqq4l_8a121072-6f44-4a42-b9b1-a54d8d04fea4/speaker/0.log"
Dec 05 20:17:06 crc kubenswrapper[4828]: I1205 20:17:06.452994 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"
Dec 05 20:17:06 crc kubenswrapper[4828]: E1205 20:17:06.453998 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:17:06 crc kubenswrapper[4828]: I1205 20:17:06.454670 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"
Dec 05 20:17:06 crc kubenswrapper[4828]: E1205 20:17:06.454891 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 20:17:07 crc kubenswrapper[4828]: I1205 20:17:07.118884 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/util/0.log"
Dec 05 20:17:07 crc kubenswrapper[4828]: I1205 20:17:07.297835 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/util/0.log"
Dec 05 20:17:07 crc kubenswrapper[4828]: I1205 20:17:07.323377 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/pull/0.log"
Dec 05 20:17:07 crc kubenswrapper[4828]: I1205 20:17:07.325264 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/pull/0.log"
Dec 05 20:17:07 crc kubenswrapper[4828]: I1205 20:17:07.486685 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/pull/0.log"
Dec 05 20:17:07 crc kubenswrapper[4828]: I1205 20:17:07.492676 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/extract/0.log"
Dec 05 20:17:07 crc kubenswrapper[4828]: I1205 20:17:07.493870 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ftqqqj_800451e0-a385-4d99-ab2d-706b98d39f8d/util/0.log"
Dec 05 20:17:07 crc kubenswrapper[4828]: I1205 20:17:07.633376 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/util/0.log"
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/util/0.log" Dec 05 20:17:07 crc kubenswrapper[4828]: I1205 20:17:07.786314 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/pull/0.log" Dec 05 20:17:07 crc kubenswrapper[4828]: I1205 20:17:07.811168 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/util/0.log" Dec 05 20:17:07 crc kubenswrapper[4828]: I1205 20:17:07.835301 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/pull/0.log" Dec 05 20:17:07 crc kubenswrapper[4828]: I1205 20:17:07.991723 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/util/0.log" Dec 05 20:17:08 crc kubenswrapper[4828]: I1205 20:17:08.001320 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/pull/0.log" Dec 05 20:17:08 crc kubenswrapper[4828]: I1205 20:17:08.029448 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wqwgz_80fa4fac-e710-4e46-8bd8-c8b4cb35b9f0/extract/0.log" Dec 05 20:17:08 crc kubenswrapper[4828]: I1205 20:17:08.160052 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/extract-utilities/0.log" Dec 05 20:17:08 crc kubenswrapper[4828]: I1205 20:17:08.328603 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/extract-utilities/0.log" Dec 05 20:17:08 crc kubenswrapper[4828]: I1205 20:17:08.344698 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/extract-content/0.log" Dec 05 20:17:08 crc kubenswrapper[4828]: I1205 20:17:08.347217 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/extract-content/0.log" Dec 05 20:17:08 crc kubenswrapper[4828]: I1205 20:17:08.561882 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/extract-utilities/0.log" Dec 05 20:17:08 crc kubenswrapper[4828]: I1205 20:17:08.578260 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/extract-content/0.log" Dec 05 20:17:08 crc kubenswrapper[4828]: I1205 20:17:08.824429 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/extract-utilities/0.log" Dec 05 20:17:09 crc kubenswrapper[4828]: I1205 20:17:09.054110 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/extract-content/0.log" Dec 05 20:17:09 crc kubenswrapper[4828]: I1205 20:17:09.076226 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/extract-utilities/0.log" Dec 05 20:17:09 crc kubenswrapper[4828]: I1205 20:17:09.089200 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/extract-content/0.log" Dec 05 20:17:09 crc kubenswrapper[4828]: I1205 20:17:09.199167 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2tlrr_1e56fecb-3765-4d29-9c94-02257c7e655b/registry-server/0.log" Dec 05 20:17:09 crc kubenswrapper[4828]: I1205 20:17:09.278200 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/extract-utilities/0.log" Dec 05 20:17:09 crc kubenswrapper[4828]: I1205 20:17:09.294507 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/extract-content/0.log" Dec 05 20:17:09 crc kubenswrapper[4828]: I1205 20:17:09.474323 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-9dx6f_57b2cfb8-3ff2-4192-a272-2d6ef4ced1cd/marketplace-operator/0.log" Dec 05 20:17:09 crc kubenswrapper[4828]: I1205 20:17:09.640105 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/extract-utilities/0.log" Dec 05 20:17:09 crc kubenswrapper[4828]: I1205 20:17:09.850482 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/extract-content/0.log" Dec 05 20:17:09 crc kubenswrapper[4828]: I1205 20:17:09.896308 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/extract-content/0.log" Dec 05 20:17:09 crc kubenswrapper[4828]: I1205 20:17:09.961900 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/extract-utilities/0.log" Dec 05 20:17:10 crc kubenswrapper[4828]: I1205 20:17:10.032743 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bch5n_d1b4b588-b3c8-4a99-b13c-89413002545e/registry-server/0.log" Dec 05 20:17:10 crc kubenswrapper[4828]: I1205 20:17:10.088140 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/extract-utilities/0.log" Dec 05 20:17:10 crc kubenswrapper[4828]: I1205 20:17:10.088544 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/extract-content/0.log" Dec 05 20:17:10 crc kubenswrapper[4828]: I1205 20:17:10.299173 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmljs_4a4ba139-0d26-4f2c-b265-35af463685f1/registry-server/0.log" Dec 05 20:17:10 crc kubenswrapper[4828]: I1205 20:17:10.339295 4828 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/extract-utilities/0.log" Dec 05 20:17:10 crc kubenswrapper[4828]: I1205 20:17:10.511206 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/extract-content/0.log" Dec 05 20:17:10 crc kubenswrapper[4828]: I1205 20:17:10.513845 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/extract-utilities/0.log" Dec 05 20:17:10 crc kubenswrapper[4828]: I1205 20:17:10.515204 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/extract-content/0.log" Dec 05 20:17:10 crc kubenswrapper[4828]: I1205 20:17:10.676629 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/extract-utilities/0.log" Dec 05 20:17:10 crc kubenswrapper[4828]: I1205 20:17:10.677980 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/extract-content/0.log" Dec 05 20:17:11 crc kubenswrapper[4828]: I1205 20:17:11.359022 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztsk4_8a85d300-213b-4bda-aff7-73bc53e7e246/registry-server/0.log" Dec 05 20:17:18 crc kubenswrapper[4828]: I1205 20:17:18.448548 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:17:18 crc kubenswrapper[4828]: E1205 20:17:18.449416 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:17:19 crc kubenswrapper[4828]: I1205 20:17:19.447505 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" Dec 05 20:17:19 crc kubenswrapper[4828]: E1205 20:17:19.448506 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:17:31 crc kubenswrapper[4828]: I1205 20:17:31.446803 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:17:31 crc kubenswrapper[4828]: I1205 20:17:31.447607 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" Dec 05 20:17:31 crc kubenswrapper[4828]: E1205 20:17:31.447627 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
Dec 05 20:17:31 crc kubenswrapper[4828]: E1205 20:17:31.447868 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:17:44 crc kubenswrapper[4828]: I1205 20:17:44.446285 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"
Dec 05 20:17:44 crc kubenswrapper[4828]: E1205 20:17:44.447392 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 20:17:44 crc kubenswrapper[4828]: I1205 20:17:44.450268 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"
Dec 05 20:17:44 crc kubenswrapper[4828]: E1205 20:17:44.450590 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:17:55 crc kubenswrapper[4828]: I1205 20:17:55.447279 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"
Dec 05 20:17:55 crc kubenswrapper[4828]: E1205 20:17:55.450780 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 20:17:59 crc kubenswrapper[4828]: I1205 20:17:59.447304 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"
Dec 05 20:17:59 crc kubenswrapper[4828]: E1205 20:17:59.449420 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:18:07 crc kubenswrapper[4828]: I1205 20:18:07.446640 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"
Dec 05 20:18:07 crc kubenswrapper[4828]: E1205 20:18:07.447947 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 20:18:13 crc kubenswrapper[4828]: I1205 20:18:13.447009 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"
Dec 05 20:18:13 crc kubenswrapper[4828]: E1205 20:18:13.448175 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:18:21 crc kubenswrapper[4828]: I1205 20:18:21.446927 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"
Dec 05 20:18:21 crc kubenswrapper[4828]: E1205 20:18:21.447650 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 20:18:28 crc kubenswrapper[4828]: I1205 20:18:28.446697 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"
Dec 05 20:18:28 crc kubenswrapper[4828]: E1205 20:18:28.448564 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:18:33 crc kubenswrapper[4828]: I1205 20:18:33.447804 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"
Dec 05 20:18:33 crc kubenswrapper[4828]: E1205 20:18:33.449298 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 20:18:41 crc kubenswrapper[4828]: I1205 20:18:41.446296 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"
Dec 05 20:18:41 crc kubenswrapper[4828]: E1205 20:18:41.447010 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:18:48 crc kubenswrapper[4828]: I1205 20:18:48.453288 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"
Dec 05 20:18:48 crc kubenswrapper[4828]: E1205 20:18:48.454100 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 20:18:54 crc kubenswrapper[4828]: I1205 20:18:54.446959 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"
Dec 05 20:18:54 crc kubenswrapper[4828]: E1205 20:18:54.447774 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:18:54 crc kubenswrapper[4828]: I1205 20:18:54.707712 4828 generic.go:334] "Generic (PLEG): container finished" podID="6f76f688-de64-4b0a-b32d-2a56bfd43fd9" containerID="20b3891944e57cde92a7931a9d6a5d114121fa973a788188dde02fb1107055c3" exitCode=0
Dec 05 20:18:54 crc kubenswrapper[4828]: I1205 20:18:54.707811 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wgdgv/must-gather-r494b" event={"ID":"6f76f688-de64-4b0a-b32d-2a56bfd43fd9","Type":"ContainerDied","Data":"20b3891944e57cde92a7931a9d6a5d114121fa973a788188dde02fb1107055c3"}
Dec 05 20:18:54 crc kubenswrapper[4828]: I1205 20:18:54.708971 4828 scope.go:117] "RemoveContainer" containerID="20b3891944e57cde92a7931a9d6a5d114121fa973a788188dde02fb1107055c3"
Dec 05 20:18:54 crc kubenswrapper[4828]: E1205 20:18:54.783731 4828 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f76f688_de64_4b0a_b32d_2a56bfd43fd9.slice/crio-20b3891944e57cde92a7931a9d6a5d114121fa973a788188dde02fb1107055c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f76f688_de64_4b0a_b32d_2a56bfd43fd9.slice/crio-conmon-20b3891944e57cde92a7931a9d6a5d114121fa973a788188dde02fb1107055c3.scope\": RecentStats: unable to find data in memory cache]"
Dec 05 20:18:55 crc kubenswrapper[4828]: I1205 20:18:55.663239 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wgdgv_must-gather-r494b_6f76f688-de64-4b0a-b32d-2a56bfd43fd9/gather/0.log"
Dec 05 20:19:03 crc kubenswrapper[4828]: I1205 20:19:03.448200 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"
Dec 05 20:19:03 crc kubenswrapper[4828]: E1205 20:19:03.449005 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 20:19:06 crc kubenswrapper[4828]: I1205 20:19:06.582216 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wgdgv/must-gather-r494b"]
Dec 05 20:19:06 crc kubenswrapper[4828]: I1205 20:19:06.582518 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wgdgv/must-gather-r494b"]
Dec 05 20:19:06 crc kubenswrapper[4828]: I1205 20:19:06.582688 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-wgdgv/must-gather-r494b" podUID="6f76f688-de64-4b0a-b32d-2a56bfd43fd9" containerName="copy" containerID="cri-o://9a09ce9525d53eda4a7b3d2ffc66379f3b2897eeda84dbd6d6780cff1a8c4bc5" gracePeriod=2
Dec 05 20:19:06 crc kubenswrapper[4828]: I1205 20:19:06.846632 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wgdgv_must-gather-r494b_6f76f688-de64-4b0a-b32d-2a56bfd43fd9/copy/0.log"
Dec 05 20:19:06 crc kubenswrapper[4828]: I1205 20:19:06.857345 4828 generic.go:334] "Generic (PLEG): container finished" podID="6f76f688-de64-4b0a-b32d-2a56bfd43fd9" containerID="9a09ce9525d53eda4a7b3d2ffc66379f3b2897eeda84dbd6d6780cff1a8c4bc5" exitCode=143
Dec 05 20:19:07 crc kubenswrapper[4828]: I1205 20:19:07.066133 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wgdgv_must-gather-r494b_6f76f688-de64-4b0a-b32d-2a56bfd43fd9/copy/0.log"
Dec 05 20:19:07 crc kubenswrapper[4828]: I1205 20:19:07.068110 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wgdgv/must-gather-r494b"
Dec 05 20:19:07 crc kubenswrapper[4828]: I1205 20:19:07.169323 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6f76f688-de64-4b0a-b32d-2a56bfd43fd9-must-gather-output\") pod \"6f76f688-de64-4b0a-b32d-2a56bfd43fd9\" (UID: \"6f76f688-de64-4b0a-b32d-2a56bfd43fd9\") "
Dec 05 20:19:07 crc kubenswrapper[4828]: I1205 20:19:07.169403 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlsjc\" (UniqueName: \"kubernetes.io/projected/6f76f688-de64-4b0a-b32d-2a56bfd43fd9-kube-api-access-tlsjc\") pod \"6f76f688-de64-4b0a-b32d-2a56bfd43fd9\" (UID: \"6f76f688-de64-4b0a-b32d-2a56bfd43fd9\") "
Dec 05 20:19:07 crc kubenswrapper[4828]: I1205 20:19:07.177108 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f76f688-de64-4b0a-b32d-2a56bfd43fd9-kube-api-access-tlsjc" (OuterVolumeSpecName: "kube-api-access-tlsjc") pod "6f76f688-de64-4b0a-b32d-2a56bfd43fd9" (UID: "6f76f688-de64-4b0a-b32d-2a56bfd43fd9"). InnerVolumeSpecName "kube-api-access-tlsjc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 20:19:07 crc kubenswrapper[4828]: I1205 20:19:07.271296 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlsjc\" (UniqueName: \"kubernetes.io/projected/6f76f688-de64-4b0a-b32d-2a56bfd43fd9-kube-api-access-tlsjc\") on node \"crc\" DevicePath \"\""
Dec 05 20:19:07 crc kubenswrapper[4828]: I1205 20:19:07.327329 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f76f688-de64-4b0a-b32d-2a56bfd43fd9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6f76f688-de64-4b0a-b32d-2a56bfd43fd9" (UID: "6f76f688-de64-4b0a-b32d-2a56bfd43fd9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 20:19:07 crc kubenswrapper[4828]: I1205 20:19:07.373592 4828 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6f76f688-de64-4b0a-b32d-2a56bfd43fd9-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 05 20:19:07 crc kubenswrapper[4828]: I1205 20:19:07.867504 4828 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wgdgv_must-gather-r494b_6f76f688-de64-4b0a-b32d-2a56bfd43fd9/copy/0.log"
Dec 05 20:19:07 crc kubenswrapper[4828]: I1205 20:19:07.867920 4828 scope.go:117] "RemoveContainer" containerID="9a09ce9525d53eda4a7b3d2ffc66379f3b2897eeda84dbd6d6780cff1a8c4bc5"
Dec 05 20:19:07 crc kubenswrapper[4828]: I1205 20:19:07.867960 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wgdgv/must-gather-r494b"
Dec 05 20:19:07 crc kubenswrapper[4828]: I1205 20:19:07.904922 4828 scope.go:117] "RemoveContainer" containerID="20b3891944e57cde92a7931a9d6a5d114121fa973a788188dde02fb1107055c3"
Dec 05 20:19:08 crc kubenswrapper[4828]: I1205 20:19:08.460057 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f76f688-de64-4b0a-b32d-2a56bfd43fd9" path="/var/lib/kubelet/pods/6f76f688-de64-4b0a-b32d-2a56bfd43fd9/volumes"
Dec 05 20:19:09 crc kubenswrapper[4828]: I1205 20:19:09.446583 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"
Dec 05 20:19:09 crc kubenswrapper[4828]: E1205 20:19:09.446881 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:19:15 crc kubenswrapper[4828]: I1205 20:19:15.447791 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"
Dec 05 20:19:15 crc kubenswrapper[4828]: E1205 20:19:15.448474 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 20:19:23 crc kubenswrapper[4828]: I1205 20:19:23.448497 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"
Dec 05 20:19:23 crc kubenswrapper[4828]: E1205 20:19:23.449418 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:19:27 crc kubenswrapper[4828]: I1205 20:19:27.447105 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"
Dec 05 20:19:27 crc kubenswrapper[4828]: E1205 20:19:27.448215 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 20:19:38 crc kubenswrapper[4828]: I1205 20:19:38.446912 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"
Dec 05 20:19:38 crc kubenswrapper[4828]: E1205 20:19:38.447959 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:19:41 crc kubenswrapper[4828]: I1205 20:19:41.446410 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"
Dec 05 20:19:41 crc kubenswrapper[4828]: E1205 20:19:41.447037 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
Dec 05 20:19:49 crc kubenswrapper[4828]: I1205 20:19:49.448890 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c"
Dec 05 20:19:49 crc kubenswrapper[4828]: E1205 20:19:49.450455 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e"
Dec 05 20:19:52 crc kubenswrapper[4828]: I1205 20:19:52.452273 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9"
Dec 05 20:19:52 crc kubenswrapper[4828]: E1205 20:19:52.452726 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"
\"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" Dec 05 20:20:02 crc kubenswrapper[4828]: I1205 20:20:02.457509 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" Dec 05 20:20:02 crc kubenswrapper[4828]: E1205 20:20:02.458552 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:20:07 crc kubenswrapper[4828]: I1205 20:20:07.446346 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:20:08 crc kubenswrapper[4828]: I1205 20:20:08.502070 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerStarted","Data":"6d28ae947c16b71fcce3b34789e59917d8e7cc31bc37e63e6c02a6345d493b6c"} Dec 05 20:20:08 crc kubenswrapper[4828]: I1205 20:20:08.503657 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 20:20:15 crc kubenswrapper[4828]: I1205 20:20:15.124880 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" Dec 05 20:20:17 crc kubenswrapper[4828]: I1205 20:20:17.448401 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" Dec 05 20:20:17 crc kubenswrapper[4828]: E1205 20:20:17.449281 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:20:31 crc kubenswrapper[4828]: I1205 20:20:31.448310 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" Dec 05 20:20:31 crc kubenswrapper[4828]: E1205 20:20:31.449156 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nlqsv_openshift-machine-config-operator(a74199c1-79be-49b4-9c04-fdb48847c85e)\"" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" podUID="a74199c1-79be-49b4-9c04-fdb48847c85e" Dec 05 20:20:35 crc kubenswrapper[4828]: I1205 20:20:35.691977 4828 scope.go:117] "RemoveContainer" containerID="8c0031e797a7b628a80dec7106ee796d13906e381ce59d60a11b316829235d9d" Dec 05 20:20:42 crc kubenswrapper[4828]: I1205 
20:20:42.455504 4828 scope.go:117] "RemoveContainer" containerID="09ef2c76d82d57f95dccc1440fbb7d812fad2288311e423396f4916955e3011c" Dec 05 20:20:42 crc kubenswrapper[4828]: I1205 20:20:42.859868 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nlqsv" event={"ID":"a74199c1-79be-49b4-9c04-fdb48847c85e","Type":"ContainerStarted","Data":"bfb876c25bf112b00ca4f1c9b51def08d5efe798403997d98bf766ea3ec16e72"} Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.991493 4828 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pcvb4"] Dec 05 20:20:52 crc kubenswrapper[4828]: E1205 20:20:52.993668 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bbe4d2c-0e99-4963-b66d-4e000155093c" containerName="registry-server" Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.993775 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bbe4d2c-0e99-4963-b66d-4e000155093c" containerName="registry-server" Dec 05 20:20:52 crc kubenswrapper[4828]: E1205 20:20:52.993932 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bbe4d2c-0e99-4963-b66d-4e000155093c" containerName="extract-content" Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.994042 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bbe4d2c-0e99-4963-b66d-4e000155093c" containerName="extract-content" Dec 05 20:20:52 crc kubenswrapper[4828]: E1205 20:20:52.994146 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e81b5974-0c81-49f9-a27b-a7a725e526e5" containerName="registry-server" Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.994417 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e81b5974-0c81-49f9-a27b-a7a725e526e5" containerName="registry-server" Dec 05 20:20:52 crc kubenswrapper[4828]: E1205 20:20:52.994502 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e81b5974-0c81-49f9-a27b-a7a725e526e5" containerName="extract-utilities" Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.994581 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e81b5974-0c81-49f9-a27b-a7a725e526e5" containerName="extract-utilities" Dec 05 20:20:52 crc kubenswrapper[4828]: E1205 20:20:52.994672 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e81b5974-0c81-49f9-a27b-a7a725e526e5" containerName="extract-content" Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.994751 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="e81b5974-0c81-49f9-a27b-a7a725e526e5" containerName="extract-content" Dec 05 20:20:52 crc kubenswrapper[4828]: E1205 20:20:52.994864 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f76f688-de64-4b0a-b32d-2a56bfd43fd9" containerName="gather" Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.994966 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f76f688-de64-4b0a-b32d-2a56bfd43fd9" containerName="gather" Dec 05 20:20:52 crc kubenswrapper[4828]: E1205 20:20:52.995064 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f76f688-de64-4b0a-b32d-2a56bfd43fd9" containerName="copy" Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.995144 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f76f688-de64-4b0a-b32d-2a56bfd43fd9" containerName="copy" Dec 05 20:20:52 crc kubenswrapper[4828]: E1205 20:20:52.995224 4828 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bbe4d2c-0e99-4963-b66d-4e000155093c" 
containerName="extract-utilities" Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.995309 4828 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bbe4d2c-0e99-4963-b66d-4e000155093c" containerName="extract-utilities" Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.995593 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f76f688-de64-4b0a-b32d-2a56bfd43fd9" containerName="gather" Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.995700 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bbe4d2c-0e99-4963-b66d-4e000155093c" containerName="registry-server" Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.995783 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f76f688-de64-4b0a-b32d-2a56bfd43fd9" containerName="copy" Dec 05 20:20:52 crc kubenswrapper[4828]: I1205 20:20:52.995905 4828 memory_manager.go:354] "RemoveStaleState removing state" podUID="e81b5974-0c81-49f9-a27b-a7a725e526e5" containerName="registry-server" Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.000041 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.008303 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pcvb4"] Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.110347 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829eb49c-2ec2-4412-94ba-6c4049ece2eb-utilities\") pod \"redhat-operators-pcvb4\" (UID: \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\") " pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.111079 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndqhg\" (UniqueName: \"kubernetes.io/projected/829eb49c-2ec2-4412-94ba-6c4049ece2eb-kube-api-access-ndqhg\") pod \"redhat-operators-pcvb4\" (UID: \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\") " pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.111314 4828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829eb49c-2ec2-4412-94ba-6c4049ece2eb-catalog-content\") pod \"redhat-operators-pcvb4\" (UID: \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\") " pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.212850 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829eb49c-2ec2-4412-94ba-6c4049ece2eb-catalog-content\") pod \"redhat-operators-pcvb4\" (UID: \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\") " pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.212919 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829eb49c-2ec2-4412-94ba-6c4049ece2eb-utilities\") pod \"redhat-operators-pcvb4\" (UID: \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\") " pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.213023 4828 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndqhg\" 
(UniqueName: \"kubernetes.io/projected/829eb49c-2ec2-4412-94ba-6c4049ece2eb-kube-api-access-ndqhg\") pod \"redhat-operators-pcvb4\" (UID: \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\") " pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.213300 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829eb49c-2ec2-4412-94ba-6c4049ece2eb-catalog-content\") pod \"redhat-operators-pcvb4\" (UID: \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\") " pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.213536 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829eb49c-2ec2-4412-94ba-6c4049ece2eb-utilities\") pod \"redhat-operators-pcvb4\" (UID: \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\") " pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.237403 4828 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndqhg\" (UniqueName: \"kubernetes.io/projected/829eb49c-2ec2-4412-94ba-6c4049ece2eb-kube-api-access-ndqhg\") pod \"redhat-operators-pcvb4\" (UID: \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\") " pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.334776 4828 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.770442 4828 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pcvb4"] Dec 05 20:20:53 crc kubenswrapper[4828]: I1205 20:20:53.971298 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcvb4" event={"ID":"829eb49c-2ec2-4412-94ba-6c4049ece2eb","Type":"ContainerStarted","Data":"eeb6716ca35b1b1edd98c1ef152a2706df33b53a03d198718e809cbaef810832"} Dec 05 20:20:54 crc kubenswrapper[4828]: I1205 20:20:54.980647 4828 generic.go:334] "Generic (PLEG): container finished" podID="829eb49c-2ec2-4412-94ba-6c4049ece2eb" containerID="75d3ce7dff894dc4abafa15af28abf543551caef512a5f2350b86d01b6a0ba2a" exitCode=0 Dec 05 20:20:54 crc kubenswrapper[4828]: I1205 20:20:54.980710 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcvb4" event={"ID":"829eb49c-2ec2-4412-94ba-6c4049ece2eb","Type":"ContainerDied","Data":"75d3ce7dff894dc4abafa15af28abf543551caef512a5f2350b86d01b6a0ba2a"} Dec 05 20:20:54 crc kubenswrapper[4828]: I1205 20:20:54.983998 4828 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 20:20:57 crc kubenswrapper[4828]: I1205 20:20:57.019145 4828 generic.go:334] "Generic (PLEG): container finished" podID="829eb49c-2ec2-4412-94ba-6c4049ece2eb" containerID="b1008df67af114064a45de51b52bb31c0cbb1d38e838e41598d1028dcd09e3c5" exitCode=0 Dec 05 20:20:57 crc kubenswrapper[4828]: I1205 20:20:57.019265 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcvb4" event={"ID":"829eb49c-2ec2-4412-94ba-6c4049ece2eb","Type":"ContainerDied","Data":"b1008df67af114064a45de51b52bb31c0cbb1d38e838e41598d1028dcd09e3c5"} Dec 05 20:20:58 crc kubenswrapper[4828]: I1205 20:20:58.031992 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcvb4" 
event={"ID":"829eb49c-2ec2-4412-94ba-6c4049ece2eb","Type":"ContainerStarted","Data":"b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073"} Dec 05 20:20:58 crc kubenswrapper[4828]: I1205 20:20:58.053785 4828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pcvb4" podStartSLOduration=3.542629156 podStartE2EDuration="6.053765534s" podCreationTimestamp="2025-12-05 20:20:52 +0000 UTC" firstStartedPulling="2025-12-05 20:20:54.983671733 +0000 UTC m=+4632.878894059" lastFinishedPulling="2025-12-05 20:20:57.494808121 +0000 UTC m=+4635.390030437" observedRunningTime="2025-12-05 20:20:58.050705221 +0000 UTC m=+4635.945927547" watchObservedRunningTime="2025-12-05 20:20:58.053765534 +0000 UTC m=+4635.948987840" Dec 05 20:21:03 crc kubenswrapper[4828]: I1205 20:21:03.336129 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:21:03 crc kubenswrapper[4828]: I1205 20:21:03.336735 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:21:04 crc kubenswrapper[4828]: I1205 20:21:04.406802 4828 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pcvb4" podUID="829eb49c-2ec2-4412-94ba-6c4049ece2eb" containerName="registry-server" probeResult="failure" output=< Dec 05 20:21:04 crc kubenswrapper[4828]: timeout: failed to connect service ":50051" within 1s Dec 05 20:21:04 crc kubenswrapper[4828]: > Dec 05 20:21:13 crc kubenswrapper[4828]: I1205 20:21:13.381611 4828 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:21:13 crc kubenswrapper[4828]: I1205 20:21:13.433442 4828 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:21:13 crc kubenswrapper[4828]: I1205 20:21:13.635196 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pcvb4"] Dec 05 20:21:15 crc kubenswrapper[4828]: I1205 20:21:15.263275 4828 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pcvb4" podUID="829eb49c-2ec2-4412-94ba-6c4049ece2eb" containerName="registry-server" containerID="cri-o://b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073" gracePeriod=2 Dec 05 20:21:15 crc kubenswrapper[4828]: I1205 20:21:15.707607 4828 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:21:15 crc kubenswrapper[4828]: I1205 20:21:15.846569 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829eb49c-2ec2-4412-94ba-6c4049ece2eb-catalog-content\") pod \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\" (UID: \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\") " Dec 05 20:21:15 crc kubenswrapper[4828]: I1205 20:21:15.846846 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829eb49c-2ec2-4412-94ba-6c4049ece2eb-utilities\") pod \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\" (UID: \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\") " Dec 05 20:21:15 crc kubenswrapper[4828]: I1205 20:21:15.846946 4828 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndqhg\" (UniqueName: \"kubernetes.io/projected/829eb49c-2ec2-4412-94ba-6c4049ece2eb-kube-api-access-ndqhg\") pod \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\" (UID: \"829eb49c-2ec2-4412-94ba-6c4049ece2eb\") " Dec 05 20:21:15 crc kubenswrapper[4828]: I1205 20:21:15.847648 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829eb49c-2ec2-4412-94ba-6c4049ece2eb-utilities" (OuterVolumeSpecName: "utilities") pod "829eb49c-2ec2-4412-94ba-6c4049ece2eb" (UID: "829eb49c-2ec2-4412-94ba-6c4049ece2eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:21:15 crc kubenswrapper[4828]: I1205 20:21:15.857057 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829eb49c-2ec2-4412-94ba-6c4049ece2eb-kube-api-access-ndqhg" (OuterVolumeSpecName: "kube-api-access-ndqhg") pod "829eb49c-2ec2-4412-94ba-6c4049ece2eb" (UID: "829eb49c-2ec2-4412-94ba-6c4049ece2eb"). InnerVolumeSpecName "kube-api-access-ndqhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 20:21:15 crc kubenswrapper[4828]: I1205 20:21:15.950004 4828 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829eb49c-2ec2-4412-94ba-6c4049ece2eb-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 20:21:15 crc kubenswrapper[4828]: I1205 20:21:15.950031 4828 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndqhg\" (UniqueName: \"kubernetes.io/projected/829eb49c-2ec2-4412-94ba-6c4049ece2eb-kube-api-access-ndqhg\") on node \"crc\" DevicePath \"\"" Dec 05 20:21:15 crc kubenswrapper[4828]: I1205 20:21:15.960226 4828 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829eb49c-2ec2-4412-94ba-6c4049ece2eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "829eb49c-2ec2-4412-94ba-6c4049ece2eb" (UID: "829eb49c-2ec2-4412-94ba-6c4049ece2eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.052923 4828 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829eb49c-2ec2-4412-94ba-6c4049ece2eb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.283406 4828 generic.go:334] "Generic (PLEG): container finished" podID="829eb49c-2ec2-4412-94ba-6c4049ece2eb" containerID="b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073" exitCode=0 Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.283486 4828 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pcvb4" Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.283475 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcvb4" event={"ID":"829eb49c-2ec2-4412-94ba-6c4049ece2eb","Type":"ContainerDied","Data":"b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073"} Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.283564 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcvb4" event={"ID":"829eb49c-2ec2-4412-94ba-6c4049ece2eb","Type":"ContainerDied","Data":"eeb6716ca35b1b1edd98c1ef152a2706df33b53a03d198718e809cbaef810832"} Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.283615 4828 scope.go:117] "RemoveContainer" containerID="b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073" Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.328747 4828 scope.go:117] "RemoveContainer" containerID="b1008df67af114064a45de51b52bb31c0cbb1d38e838e41598d1028dcd09e3c5" Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.328972 4828 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pcvb4"] Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.338127 4828 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pcvb4"] Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.365548 4828 scope.go:117] "RemoveContainer" containerID="75d3ce7dff894dc4abafa15af28abf543551caef512a5f2350b86d01b6a0ba2a" Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.396375 4828 scope.go:117] "RemoveContainer" containerID="b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073" Dec 05 20:21:16 crc kubenswrapper[4828]: E1205 20:21:16.397231 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073\": container with ID starting with b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073 not found: ID does not exist" containerID="b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073" Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.397278 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073"} err="failed to get container status \"b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073\": rpc error: code = NotFound desc = could not find container \"b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073\": container with ID starting with b89fd87bec7426a1ba3ae48a001ecc78c5198928e7339ffa62dd71ea6fbba073 not found: ID does not exist" Dec 05 20:21:16 crc 
kubenswrapper[4828]: I1205 20:21:16.397309 4828 scope.go:117] "RemoveContainer" containerID="b1008df67af114064a45de51b52bb31c0cbb1d38e838e41598d1028dcd09e3c5" Dec 05 20:21:16 crc kubenswrapper[4828]: E1205 20:21:16.397788 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1008df67af114064a45de51b52bb31c0cbb1d38e838e41598d1028dcd09e3c5\": container with ID starting with b1008df67af114064a45de51b52bb31c0cbb1d38e838e41598d1028dcd09e3c5 not found: ID does not exist" containerID="b1008df67af114064a45de51b52bb31c0cbb1d38e838e41598d1028dcd09e3c5" Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.397833 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1008df67af114064a45de51b52bb31c0cbb1d38e838e41598d1028dcd09e3c5"} err="failed to get container status \"b1008df67af114064a45de51b52bb31c0cbb1d38e838e41598d1028dcd09e3c5\": rpc error: code = NotFound desc = could not find container \"b1008df67af114064a45de51b52bb31c0cbb1d38e838e41598d1028dcd09e3c5\": container with ID starting with b1008df67af114064a45de51b52bb31c0cbb1d38e838e41598d1028dcd09e3c5 not found: ID does not exist" Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.397873 4828 scope.go:117] "RemoveContainer" containerID="75d3ce7dff894dc4abafa15af28abf543551caef512a5f2350b86d01b6a0ba2a" Dec 05 20:21:16 crc kubenswrapper[4828]: E1205 20:21:16.398160 4828 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75d3ce7dff894dc4abafa15af28abf543551caef512a5f2350b86d01b6a0ba2a\": container with ID starting with 75d3ce7dff894dc4abafa15af28abf543551caef512a5f2350b86d01b6a0ba2a not found: ID does not exist" containerID="75d3ce7dff894dc4abafa15af28abf543551caef512a5f2350b86d01b6a0ba2a" Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.398187 4828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75d3ce7dff894dc4abafa15af28abf543551caef512a5f2350b86d01b6a0ba2a"} err="failed to get container status \"75d3ce7dff894dc4abafa15af28abf543551caef512a5f2350b86d01b6a0ba2a\": rpc error: code = NotFound desc = could not find container \"75d3ce7dff894dc4abafa15af28abf543551caef512a5f2350b86d01b6a0ba2a\": container with ID starting with 75d3ce7dff894dc4abafa15af28abf543551caef512a5f2350b86d01b6a0ba2a not found: ID does not exist" Dec 05 20:21:16 crc kubenswrapper[4828]: I1205 20:21:16.464495 4828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829eb49c-2ec2-4412-94ba-6c4049ece2eb" path="/var/lib/kubelet/pods/829eb49c-2ec2-4412-94ba-6c4049ece2eb/volumes" Dec 05 20:21:35 crc kubenswrapper[4828]: I1205 20:21:35.761658 4828 scope.go:117] "RemoveContainer" containerID="098f1b7c0dff73829f7b4dba62c3393fcfc157ce0eb0b68f127e8f65bc55aa3a" Dec 05 20:21:35 crc kubenswrapper[4828]: I1205 20:21:35.825811 4828 scope.go:117] "RemoveContainer" containerID="d874424a52e3fb125305f9a0928fa0b98f6ea195697dde036f56bd7bb28d097e" Dec 05 20:21:35 crc kubenswrapper[4828]: I1205 20:21:35.861004 4828 scope.go:117] "RemoveContainer" containerID="33c27644db0734551a36abd0ae9d41934bca826dcea688345c67bc14c5242bdc" Dec 05 20:22:46 crc kubenswrapper[4828]: I1205 20:22:46.306131 4828 generic.go:334] "Generic (PLEG): container finished" podID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd" containerID="6d28ae947c16b71fcce3b34789e59917d8e7cc31bc37e63e6c02a6345d493b6c" exitCode=1 Dec 05 20:22:46 crc kubenswrapper[4828]: I1205 
20:22:46.306225 4828 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" event={"ID":"03c4fc5d-6be1-47b4-9c39-7bb86046dafd","Type":"ContainerDied","Data":"6d28ae947c16b71fcce3b34789e59917d8e7cc31bc37e63e6c02a6345d493b6c"} Dec 05 20:22:46 crc kubenswrapper[4828]: I1205 20:22:46.306659 4828 scope.go:117] "RemoveContainer" containerID="669b952c3ede85f0975d95c6d6a646122e5087b1c3258c23ed6e56f0390472d9" Dec 05 20:22:46 crc kubenswrapper[4828]: I1205 20:22:46.307426 4828 scope.go:117] "RemoveContainer" containerID="6d28ae947c16b71fcce3b34789e59917d8e7cc31bc37e63e6c02a6345d493b6c" Dec 05 20:22:46 crc kubenswrapper[4828]: E1205 20:22:46.307808 4828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-575477cdfc-lrhm5_openstack-operators(03c4fc5d-6be1-47b4-9c39-7bb86046dafd)\"" pod="openstack-operators/infra-operator-controller-manager-575477cdfc-lrhm5" podUID="03c4fc5d-6be1-47b4-9c39-7bb86046dafd"